This relates generally to systems and methods of spatial groups within multi-user communication sessions.
Some computer graphical environments provide two-dimensional and/or three-dimensional environments where at least some objects displayed for a user's viewing are virtual and generated by a computer. In some examples, the three-dimensional environments are presented by multiple devices communicating in a multi-user communication session. In some examples, an avatar (e.g., a representation) of each user participating in the multi-user communication session (e.g., via the computing devices) is displayed in the three-dimensional environment of the multi-user communication session. In some examples, content can be shared in the three-dimensional environment for viewing and interaction by multiple users participating in the multi-user communication session.
Some examples of the disclosure are directed to systems and methods for presenting content in a three-dimensional environment by one or more electronic devices in a multi-user communication session. In some examples, a first electronic device and a second electronic device are communicatively linked in a multi-user communication session, wherein the first electronic device and the second electronic device are each configured to display a three-dimensional environment. In some examples, the first electronic device and the second electronic device are grouped in a first spatial group within the multi-user communication session. In some examples, the first electronic device displays an avatar corresponding to a user of the second electronic device in the three-dimensional environment, and the second electronic device displays an avatar corresponding to a user of the first electronic device in the three-dimensional environment. In some examples, audio corresponding to the voice of the user of each of the first electronic device and the second electronic device is presented with the corresponding avatar in the multi-user communication session. In some examples, the first electronic device and the second electronic device may share and present content in the three-dimensional environment. In some examples, if the second electronic device determines that the first electronic device has changed states (and/or vice versa), the user of the first electronic device and the user of the second electronic device are no longer grouped into the same spatial group within the multi-user communication session. In some examples, when the users of the electronic devices are grouped into separate spatial groups in the multi-user communication session, the first electronic device replaces display of the avatar corresponding to the user of the second electronic device with a two-dimensional representation of the user of the second electronic device, and the second electronic device replaces display of the avatar corresponding to the user of the first electronic device with a two-dimensional representation of the user of the first electronic device.
In some examples, while the first electronic device and the second electronic device are communicatively linked and grouped into a first spatial group (e.g., a baseline spatial group) within the multi-user communication session, the determination that one of the electronic devices has changed states is based on a manner in which the avatars and/or content is displayed in the shared three-dimensional environment. In some examples, while the avatars corresponding to the users of the first electronic device and the second electronic device are displayed, if the first electronic device activates an audio mode, which causes the avatar corresponding to the user of the first electronic device to no longer be displayed at the second electronic device, the first electronic device and the second electronic device are no longer operating in the same state. Accordingly, the user of the first electronic device is grouped into a second spatial group (e.g., an audio-only spatial group), separate from the first spatial group. In some examples, if the first electronic device displays content that is private and exclusive to the user of the first electronic device, the first electronic device and the second electronic device are no longer operating in the same state, which causes the user of the first electronic device to be grouped into a second spatial group (e.g., a private exclusive spatial group), separate from the first spatial group. In some examples, while the first electronic device and the second electronic device are displaying shared content in the three-dimensional environment, if the first electronic device displays the shared content in a full-screen mode, the first electronic device and the second electronic device are no longer operating in the same state. Accordingly, the user of the first electronic device is grouped into a second spatial group (e.g., a shared exclusive spatial group), separate from the first spatial group.
In some examples, a spatial group in the multi-user communication session has a spatial arrangement that dictates the locations of the users and content located in the spatial group. In some examples, users in the same spatial group within the multi-user communication session experience spatial truth according to the spatial arrangement of the spatial group. In some examples, when the user of the first electronic device is in a first spatial group and the user of the second electronic device is in a second spatial group in the multi-user communication session, the users experience spatial truth that is localized to their respective spatial groups. In some examples, while the user of the first electronic device and the user of the second electronic device are grouped into separate spatial groups within the multi-user communication session, if the first electronic device and the second electronic device return to the same operating state, the user of the first electronic device and the user of the second electronic device are regrouped into the same spatial group within the multi-user communication session.
The full descriptions of these examples are provided in the Drawings and the Detailed Description, and it is understood that this Summary does not limit the scope of the disclosure in any way.
For improved understanding of the various examples described herein, reference should be made to the Detailed Description below along with the following drawings. Like reference numerals often refer to corresponding parts throughout the drawings.
Some examples of the disclosure are directed to systems and methods for presenting content in a three-dimensional environment by one or more electronic devices in a multi-user communication session. In some examples, a first electronic device and a second electronic device are communicatively linked in a multi-user communication session, wherein the first electronic device and the second electronic device are each configured to display a three-dimensional environment. In some examples, the first electronic device and the second electronic device are grouped in a first spatial group within the multi-user communication session. In some examples, the first electronic device displays an avatar corresponding to a user of the second electronic device in the three-dimensional environment, and the second electronic device displays an avatar corresponding to a user of the first electronic device in the three-dimensional environment. In some examples, audio corresponding to the voice of the user of each of the first electronic device and the second electronic device is presented with the corresponding avatar in the multi-user communication session. In some examples, the first electronic device and the second electronic device may share and present content in the three-dimensional environment. In some examples, if the second electronic device determines that the first electronic device has changed states (and/or vice versa), the user of the first electronic device and the user of the second electronic device are no longer grouped into the same spatial group within the multi-user communication session. In some examples, when the users of the electronic devices are grouped into separate spatial groups in the multi-user communication session, the first electronic device replaces display of the avatar corresponding to the user of the second electronic device with a two-dimensional representation of the user of the second electronic device, and the second electronic device replaces display of the avatar corresponding to the user of the first electronic device with a two-dimensional representation of the user of the first electronic device.
In some examples, while the first electronic device and the second electronic device are communicatively linked and grouped into a first spatial group (e.g., a baseline spatial group) within the multi-user communication session, the determination that one of the electronic devices has changed states is based on a manner in which the avatars and/or content is displayed in the shared three-dimensional environment. In some examples, while the avatars corresponding to the users of the first electronic device and the second electronic device are displayed, if the first electronic device activates an audio mode, which causes the avatar corresponding to the user of the first electronic device to no longer be displayed at the second electronic device, the first electronic device and the second electronic device are no longer operating in the same state. Accordingly, the user of the first electronic device is grouped into a second spatial group (e.g., an audio-only spatial group), separate from the first spatial group. In some examples, if the first electronic device displays content that is private and exclusive to the user of the first electronic device, the first electronic device and the second electronic device are no longer operating in the same state, which causes the user of the first electronic device to be grouped into a second spatial group (e.g., a private exclusive spatial group), separate from the first spatial group. In some examples, while the first electronic device and the second electronic device are displaying shared content in the three-dimensional environment, if the first electronic device displays the shared content in a full-screen mode, the first electronic device and the second electronic device are no longer operating in the same state. Accordingly, the user of the first electronic device is grouped into a second spatial group (e.g., a shared exclusive spatial group), separate from the first spatial group.
In some examples, a spatial group in the multi-user communication session has a spatial arrangement that dictates the locations of the users and content located in the spatial group. In some examples, users in the same spatial group within the multi-user communication session experience spatial truth according to the spatial arrangement of the spatial group. In some examples, when the user of the first electronic device is in a first spatial group and the user of the second electronic device is in a second spatial group in the multi-user communication session, the users experience spatial truth that is localized to their respective spatial groups. In some examples, while the user of the first electronic device and the user of the second electronic device are grouped into separate spatial groups within the multi-user communication session, if the first electronic device and the second electronic device return to the same operating state, the user of the first electronic device and the user of the second electronic device are regrouped into the same spatial group within the multi-user communication session.
In some examples, displaying content in the three-dimensional environment while in the multi-user communication session may include interaction with one or more user interface elements. In some examples, a user's gaze may be tracked by the electronic device as an input for targeting a selectable option/affordance within a respective user interface element that is displayed in the three-dimensional environment. For example, gaze can be used to identify one or more options/affordances targeted for selection using another selection input. In some examples, a respective option/affordance may be selected using hand-tracking input detected via an input device in communication with the electronic device. In some examples, objects displayed in the three-dimensional environment may be moved and/or reoriented in the three-dimensional environment in accordance with movement input detected via the input device.
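By way of illustration only, the following Swift sketch uses hypothetical type and function names (which are not part of this disclosure) to show one way gaze targeting and a hand-tracked selection input, such as a pinch, might be combined to select an option/affordance: the gaze determines which affordance is targeted, and the pinch commits the selection.

```swift
import Foundation

// Hypothetical sketch: the affordance currently targeted by gaze is the one a
// hand-tracked pinch selects. Names here are illustrative, not from the disclosure.
struct Affordance {
    let identifier: String
    var isHovered: Bool = false
}

final class GazeAndPinchSelector {
    private var affordances: [String: Affordance] = [:]
    private var gazeTarget: String?

    func register(_ affordance: Affordance) {
        affordances[affordance.identifier] = affordance
    }

    // Called when eye tracking reports which affordance (if any) the gaze ray hits.
    func gazeMoved(to identifier: String?) {
        if let previous = gazeTarget { affordances[previous]?.isHovered = false }
        gazeTarget = identifier
        if let identifier { affordances[identifier]?.isHovered = true }
    }

    // Called when hand tracking detects a pinch; returns the selected affordance, if any.
    func pinchDetected() -> Affordance? {
        guard let target = gazeTarget else { return nil }
        return affordances[target]
    }
}
```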
It should be understood that virtual object 110 is a representative virtual object and one or more different virtual objects (e.g., of various dimensionality such as two-dimensional or three-dimensional virtual objects) can be included and rendered in a three-dimensional computer-generated environment. For example, the virtual object can represent an application or a user interface displayed in the computer-generated environment. In some examples, the virtual object can represent content corresponding to the application and/or displayed via the user interface in the computer-generated environment. In some examples, the virtual object 110 is optionally configured to be interactive and responsive to user input, such that a user may virtually touch, tap, move, rotate, or otherwise interact with the virtual object. In some examples, the virtual object 110 may be displayed in a three-dimensional computer-generated environment within a multi-user communication session (“multi-user communication session,” “communication session”). In some such examples, as described in more detail below, the virtual object 110 may be viewable and/or configured to be interactive and responsive to multiple users and/or user input provided by multiple users, respectively. Additionally, it should be understood that the three-dimensional environment (or three-dimensional virtual object) described herein may be a representation of a three-dimensional environment (or three-dimensional virtual object) projected or presented at an electronic device.
In the discussion that follows, an electronic device that is in communication with a display generation component and one or more input devices is described. It should be understood that the electronic device optionally is in communication with one or more other physical user-interface devices, such as a touch-sensitive surface, a physical keyboard, a mouse, a joystick, a hand tracking device, an eye tracking device, a stylus, etc. Further, as described above, it should be understood that the described electronic device, display, and touch-sensitive surface are optionally distributed amongst two or more devices. Therefore, as used in this disclosure, information displayed on the electronic device or by the electronic device is optionally used to describe information outputted by the electronic device for display on a separate display device (touch-sensitive or not). Similarly, as used in this disclosure, input received on the electronic device (e.g., touch input received on a touch-sensitive surface of the electronic device, or touch input received on the surface of a stylus) is optionally used to describe input received on a separate input device, from which the electronic device receives input information.
The device typically supports a variety of applications, such as one or more of the following: a drawing application, a presentation application, a word processing application, a website creation application, a disk authoring application, a spreadsheet application, a gaming application, a telephone application, a video conferencing application, an e-mail application, an instant messaging application, a workout support application, a photo management application, a digital camera application, a digital video camera application, a web browsing application, a digital music player application, a television channel browsing application, and/or a digital video player application.
As illustrated in
Communication circuitry 222A, 222B optionally includes circuitry for communicating with electronic devices, networks, such as the Internet, intranets, a wired network and/or a wireless network, cellular networks, and wireless local area networks (LANs). Communication circuitry 222A, 222B optionally includes circuitry for communicating using near-field communication (NFC) and/or short-range communication, such as Bluetooth®.
Processor(s) 218A, 218B include one or more general processors, one or more graphics processors, and/or one or more digital signal processors. In some examples, memory 220A, 220B is a non-transitory computer-readable storage medium (e.g., flash memory, random access memory, or other volatile or non-volatile memory or storage) that stores computer-readable instructions configured to be executed by processor(s) 218A, 218B to perform the techniques, processes, and/or methods described below. In some examples, memory 220A, 220B can include more than one non-transitory computer-readable storage medium. A non-transitory computer-readable storage medium can be any medium (e.g., excluding a signal) that can tangibly contain or store computer-executable instructions for use by or in connection with the instruction execution system, apparatus, or device. In some examples, the storage medium is a transitory computer-readable storage medium. In some examples, the storage medium is a non-transitory computer-readable storage medium. The non-transitory computer-readable storage medium can include, but is not limited to, magnetic, optical, and/or semiconductor storages. Examples of such storage include magnetic disks, optical discs based on CD, DVD, or Blu-ray technologies, as well as persistent solid-state memory such as flash, solid-state drives, and the like.
In some examples, display generation component(s) 214A, 214B include a single display (e.g., a liquid-crystal display (LCD), organic light-emitting diode (OLED), or other types of display). In some examples, display generation component(s) 214A, 214B includes multiple displays. In some examples, display generation component(s) 214A, 214B can include a display with touch capability (e.g., a touch screen), a projector, a holographic projector, a retinal projector, a transparent or translucent display, etc. In some examples, devices 260 and 270 include touch-sensitive surface(s) 209A and 209B, respectively, for receiving user inputs, such as tap inputs and swipe inputs or other gestures. In some examples, display generation component(s) 214A, 214B and touch-sensitive surface(s) 209A, 209B form touch-sensitive display(s) (e.g., a touch screen integrated with devices 260 and 270, respectively, or external to devices 260 and 270, respectively, that is in communication with devices 260 and 270).
Devices 260 and 270 optionally include image sensor(s) 206A and 206B, respectively. Image sensor(s) 206A/206B optionally include one or more visible light image sensors, such as charge-coupled device (CCD) sensors, and/or complementary metal-oxide-semiconductor (CMOS) sensors operable to obtain images of physical objects from the real-world environment. Image sensor(s) 206A/206B also optionally include one or more infrared (IR) sensors, such as a passive or an active IR sensor, for detecting infrared light from the real-world environment. For example, an active IR sensor includes an IR emitter for emitting infrared light into the real-world environment. Image sensor(s) 206A/206B also optionally include one or more cameras configured to capture movement of physical objects in the real-world environment. Image sensor(s) 206A/206B also optionally include one or more depth sensors configured to detect the distance of physical objects from device 260/270. In some examples, information from one or more depth sensors can allow the device to identify and differentiate objects in the real-world environment from other objects in the real-world environment. In some examples, one or more depth sensors can allow the device to determine the texture and/or topography of objects in the real-world environment.
In some examples, devices 260 and 270 use CCD sensors, event cameras, and depth sensors in combination to detect the physical environment around devices 260 and 270. In some examples, image sensor(s) 206A/206B include a first image sensor and a second image sensor. The first image sensor and the second image sensor work in tandem and are optionally configured to capture different information of physical objects in the real-world environment. In some examples, the first image sensor is a visible light image sensor and the second image sensor is a depth sensor. In some examples, device 260/270 uses image sensor(s) 206A/206B to detect the position and orientation of device 260/270 and/or display generation component(s) 214A/214B in the real-world environment. For example, device 260/270 uses image sensor(s) 206A/206B to track the position and orientation of display generation component(s) 214A/214B relative to one or more fixed objects in the real-world environment.
In some examples, device 260/270 includes microphone(s) 213A/213B or other audio sensors. Device 260/270 uses microphone(s) 213A/213B to detect sound from the user and/or the real-world environment of the user. In some examples, microphone(s) 213A/213B includes an array of microphones (a plurality of microphones) that optionally operate in tandem, such as to identify ambient noise or to locate the source of sound in space of the real-world environment.
In some examples, device 260/270 includes location sensor(s) 204A/204B for detecting a location of device 260/270 and/or display generation component(s) 214A/214B. For example, location sensor(s) 204A/204B can include a GPS receiver that receives data from one or more satellites and allows device 260/270 to determine the device's absolute position in the physical world.
In some examples, device 260/270 includes orientation sensor(s) 210A/210B for detecting orientation and/or movement of device 260/270 and/or display generation component(s) 214A/214B. For example, device 260/270 uses orientation sensor(s) 210A/210B to track changes in the position and/or orientation of device 260/270 and/or display generation component(s) 214A/214B, such as with respect to physical objects in the real-world environment. Orientation sensor(s) 210A/210B optionally include one or more gyroscopes and/or one or more accelerometers.
Device 260/270 includes hand tracking sensor(s) 202A/202B and/or eye tracking sensor(s) 212A/212B, in some examples. Hand tracking sensor(s) 202A/202B are configured to track the position/location of one or more portions of the user's hands, and/or motions of one or more portions of the user's hands with respect to the extended reality environment, relative to the display generation component(s) 214A/214B, and/or relative to another defined coordinate system. Eye tracking sensor(s) 212A/212B are configured to track the position and movement of a user's gaze (eyes, face, or head, more generally) with respect to the real-world or extended reality environment and/or relative to the display generation component(s) 214A/214B. In some examples, hand tracking sensor(s) 202A/202B and/or eye tracking sensor(s) 212A/212B are implemented together with the display generation component(s) 214A/214B. In some examples, the hand tracking sensor(s) 202A/202B and/or eye tracking sensor(s) 212A/212B are implemented separate from the display generation component(s) 214A/214B.
In some examples, the hand tracking sensor(s) 202A/202B can use image sensor(s) 206A/206B (e.g., one or more IR cameras, 3D cameras, depth cameras, etc.) that capture three-dimensional information from the real-world including one or more hands (e.g., of a human user). In some examples, the hands can be resolved with sufficient resolution to distinguish fingers and their respective positions. In some examples, one or more image sensor(s) 206A/206B are positioned relative to the user to define a field of view of the image sensor(s) 206A/206B and an interaction space in which finger/hand position, orientation and/or movement captured by the image sensors are used as inputs (e.g., to distinguish from a user's resting hand or other hands of other persons in the real-world environment). Tracking the fingers/hands for input (e.g., gestures, touch, tap, etc.) can be advantageous in that it does not require the user to touch, hold or wear any sort of beacon, sensor, or other marker.
In some examples, eye tracking sensor(s) 212A/212B includes at least one eye tracking camera (e.g., infrared (IR) cameras) and/or illumination sources (e.g., IR light sources, such as LEDs) that emit light towards a user's eyes. The eye tracking cameras may be pointed towards a user's eyes to receive reflected IR light from the light sources directly or indirectly from the eyes. In some examples, both eyes are tracked separately by respective eye tracking cameras and illumination sources, and a focus/gaze can be determined from tracking both eyes. In some examples, one eye (e.g., a dominant eye) is tracked by a respective eye tracking camera/illumination source(s).
Device 260/270 and system 201 are not limited to the components and configuration of
As shown in
As mentioned above, in some examples, the first electronic device 360 is optionally in a multi-user communication session with the second electronic device 370. For example, the first electronic device 360 and the second electronic device 370 (e.g., via communication circuitry 222A/222B) are configured to present a shared three-dimensional environment 350A/350B that includes one or more shared virtual objects (e.g., content such as images, video, audio and the like, representations of user interfaces of applications, etc.). As used herein, the term “shared three-dimensional environment” refers to a three-dimensional environment that is independently presented, displayed, and/or visible at two or more electronic devices via which content, applications, data, and the like may be shared and/or presented to users of the two or more electronic devices. In some examples, while the first electronic device 360 is in the multi-user communication session with the second electronic device 370, an avatar corresponding to the user of one electronic device is optionally displayed in the three-dimensional environment that is displayed via the other electronic device. For example, as shown in
In some examples, the presentation of avatars 315/317 as part of a shared three-dimensional environment is optionally accompanied by an audio effect corresponding to a voice of the users of the electronic devices 370/360. For example, the avatar 315 displayed in the three-dimensional environment 350A using the first electronic device 360 is optionally accompanied by an audio effect corresponding to the voice of the user of the second electronic device 370. In some such examples, when the user of the second electronic device 370 speaks, the voice of the user may be detected by the second electronic device 370 (e.g., via the microphone(s) 213B) and transmitted to the first electronic device 360 (e.g., via the communication circuitry 222B/222A), such that the detected voice of the user of the second electronic device 370 may be presented as audio (e.g., using speaker(s) 216A) to the user of the first electronic device 360 in three-dimensional environment 350A. In some examples, the audio effect corresponding to the voice of the user of the second electronic device 370 may be spatialized such that it appears to the user of the first electronic device 360 to emanate from the location of avatar 315 in the shared three-dimensional environment 350A (e.g., despite being outputted from the speakers of the first electronic device 360). Similarly, the avatar 317 displayed in the three-dimensional environment 350B using the second electronic device 370 is optionally accompanied by an audio effect corresponding to the voice of the user of the first electronic device 360. In some such examples, when the user of the first electronic device 360 speaks, the voice of the user may be detected by the first electronic device 360 (e.g., via the microphone(s) 213A) and transmitted to the second electronic device 370 (e.g., via the communication circuitry 222A/222B), such that the detected voice of the user of the first electronic device 360 may be presented as audio (e.g., using speaker(s) 216B) to the user of the second electronic device 370 in three-dimensional environment 350B. In some examples, the audio effect corresponding to the voice of the user of the first electronic device 360 may be spatialized such that it appears to the user of the second electronic device 370 to emanate from the location of avatar 317 in the shared three-dimensional environment 350B (e.g., despite being outputted from the speakers of the second electronic device 370).
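As a simplified illustration of the spatialization described above, the following sketch derives distance-based gain and pan parameters from the avatar's position relative to the local viewpoint; the names are hypothetical, and a practical renderer would typically use head-related transfer functions rather than this simple model.

```swift
import Foundation

// Hypothetical sketch: derive simple rendering parameters so a remote user's voice
// appears to emanate from that user's avatar location in the local environment.
struct Point3D { var x, y, z: Double }

struct SpatializedVoice {
    var avatarPosition: Point3D       // where the avatar is placed in the local environment
    var listenerPosition: Point3D     // the local user's viewpoint

    // Distance-based gain plus a crude left/right pan; real systems would use HRTFs.
    func renderParameters() -> (gain: Double, pan: Double) {
        let dx = avatarPosition.x - listenerPosition.x
        let dy = avatarPosition.y - listenerPosition.y
        let dz = avatarPosition.z - listenerPosition.z
        let distance = max(0.5, (dx * dx + dy * dy + dz * dz).squareRoot())
        let gain = min(1.0, 1.0 / distance)          // farther avatars sound quieter
        let pan = max(-1.0, min(1.0, dx / distance)) // avatar to the right pans right
        return (gain, pan)
    }
}
```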
In some examples, while in the multi-user communication session, the avatars 315/317 are displayed in the three-dimensional environments 350A/350B with respective orientations that correspond to and/or are based on orientations of the electronic devices 360/370 (and/or the users of electronic devices 360/370) in the physical environments surrounding the electronic devices 360/370. For example, as shown in
Additionally, in some examples, while in the multi-user communication session, a viewpoint of the three-dimensional environments 350A/350B and/or a location of the viewpoint of the three-dimensional environments 350A/350B optionally changes in accordance with movement of the electronic devices 360/370 (e.g., by the users of the electronic devices 360/370). For example, while in the communication session, if the first electronic device 360 is moved closer toward the representation of the table 306′ and/or the avatar 315 (e.g., because the user of the first electronic device 360 moved forward in the physical environment surrounding the first electronic device 360), the viewpoint of the three-dimensional environment 350A would change accordingly, such that the representation of the table 306′, the representation of the window 309′ and the avatar 315 appear larger in the field of view. In some examples, each user may independently interact with the three-dimensional environment 350A/350B, such that changes in viewpoints of the three-dimensional environment 350A and/or interactions with virtual objects in the three-dimensional environment 350A by the first electronic device 360 optionally do not affect what is shown in the three-dimensional environment 350B at the second electronic device 370, and vice versa.
In some examples, the avatars 315/317 are representations (e.g., full-body renderings) of the users of the electronic devices 370/360. In some examples, the avatars 315/317 are representations of a portion (e.g., a rendering of a head, face, head and torso, etc.) of the users of the electronic devices 370/360. In some examples, the avatars 315/317 are user-personalized, user-selected, and/or user-created representations displayed in the three-dimensional environments 350A/350B that are representative of the users of the electronic devices 370/360. It should be understood that, while the avatars 315/317 illustrated in
As mentioned above, while the first electronic device 360 and the second electronic device 370 are in the multi-user communication session, the three-dimensional environments 350A/350B may be a shared three-dimensional environment that is presented using the electronic devices 360/370. In some examples, content that is viewed by one user at one electronic device may be shared with another user at another electronic device in the multi-user communication session. In some such examples, the content may be experienced (e.g., viewed and/or interacted with) by both users (e.g., via their respective electronic devices) in the shared three-dimensional environment. For example, as shown in
In some examples, the three-dimensional environments 350A/350B include unshared content that is private to one user in the multi-user communication session. For example, in
As mentioned previously above, in some examples, the user of the first electronic device 360 and the user of the second electronic device 370 are in a spatial group 340 within the multi-user communication session. In some examples, the spatial group 340 may be a baseline (e.g., a first or default) spatial group within the multi-user communication session. For example, when the user of the first electronic device 360 and the user of the second electronic device 370 initially join the multi-user communication session, the user of the first electronic device 360 and the user of the second electronic device 370 are automatically (and initially, as discussed in more detail below) associated with (e.g., grouped into) the spatial group 340 within the multi-user communication session. In some examples, while the users are in the spatial group 340 as shown in
It should be understood that, in some examples, more than two electronic devices may be communicatively linked in a multi-user communication session. For example, in a situation in which three electronic devices are communicatively linked in a multi-user communication session, a first electronic device would display two avatars, rather than just one avatar, corresponding to the users of the other two electronic devices. It should therefore be understood that the various processes and exemplary interactions described herein with reference to the first electronic device 360 and the second electronic device 370 in the multi-user communication session optionally apply to situations in which more than two electronic devices are communicatively linked in a multi-user communication session.
In some examples, it may be advantageous to selectively control the display of the avatars corresponding to the users of electronic devices that are communicatively linked in a multi-user communication session. For example, as described herein, content may be shared and presented in the three-dimensional environment such that the content is optionally viewable by and/or interactive to multiple users in the multi-user communication session. As discussed above, the three-dimensional environment optionally includes avatars corresponding to the users of the electronic devices that are in the communication session. In some instances, the presentation of the content in the three-dimensional environment with the avatars corresponding to the users of the electronic devices may cause portions of the content to be blocked or obscured from a viewpoint of one or more users in the multi-user communication session and/or may distract one or more users in the multi-user communication session. In some examples, the presentation of the content and/or a change in the presentation of the content in the three-dimensional environment corresponds to a change of a state of a first electronic device presenting the content. In some examples, in response to detecting the change of the state of the first electronic device, the user of the first electronic device becomes associated with a second spatial group that is separate from the baseline spatial group (e.g., 340) discussed above in the multi-user communication session. Additionally, in some examples, it may be advantageous to, when the users of the electronic devices are associated with different spatial groups within the multi-user communication session, cease display of the avatars corresponding to the users of the electronic devices depending on the type of content that is being presented, as described herein in more detail.
As similarly described above with reference to
In some examples, the user of the first electronic device 460 and the user of the second electronic device 470 become associated with (e.g., located in) different spatial groups within the multi-user communication session when one of the electronic devices changes state. For example, if one of the electronic devices changes states, the electronic device transmits an indication (e.g., directly or indirectly) to the other electronic device(s) in the multi-user communication session indicating that the electronic device has changed state. As described in more detail herein, an electronic device in the multi-user communication session changes state when presentation of an avatar corresponding to the user of the electronic device and/or presentation of content in the shared three-dimensional environment changes.
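The grouping behavior described above may be modeled as in the following sketch, in which each participant advertises an operating state and participants are regrouped so that a spatial group contains only devices operating in the same state. The names and the specific grouping policy (e.g., one group per user for the audio-only and private states) are hypothetical and shown for illustration only.

```swift
import Foundation

// Hypothetical sketch: participants whose devices operate in the same state share a
// spatial group; a state change moves a participant into a separate group.
enum OperatingState: Hashable {
    case baseline            // avatar displayed, no exclusive content
    case audioOnly           // avatar hidden because an audio mode is active
    case privateExclusive    // private immersive content is displayed
    case sharedExclusive     // shared content is displayed in a full-screen mode
}

struct SessionMember {
    let userID: String
    var state: OperatingState
    var spatialGroupID: String
}

// Reassign spatial groups after any device reports a state change.
func regroup(_ members: [SessionMember]) -> [SessionMember] {
    members.map { member in
        var updated = member
        switch member.state {
        case .baseline:         updated.spatialGroupID = "baseline"
        case .sharedExclusive:  updated.spatialGroupID = "shared-exclusive"
        case .audioOnly:        updated.spatialGroupID = "audio-only:\(member.userID)"
        case .privateExclusive: updated.spatialGroupID = "private:\(member.userID)"
        }
        return updated
    }
}
```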
In some examples, as discussed below, an electronic device in the multi-user communication session changes states when the presentation of the avatar corresponding to the user of the electronic device changes in the shared three-dimensional environment. As an example, in
In some examples, in response to receiving the selection input 472A, the first electronic device 460 activates the audio mode at the first electronic device 460, which causes the second electronic device 470 to cease display of the avatar 417 corresponding to the user of the first electronic device 460, as shown in
In some examples, when the first electronic device 460 activates the audio mode in response to the selection of the first user interface object 418A, the first electronic device 460 transmits (e.g., directly or indirectly) an indication of the activation of the audio mode at the first electronic device 460 to the second electronic device 470. For example, as previously discussed above, the user of the first electronic device 460 and the user of the second electronic device 470 are in the first spatial group 440 within the multi-user communication session when the selection input 472A is received (e.g., in
In some examples, while the first electronic device 460 and the second electronic device 470 are in the multi-user communication session, the user of the first electronic device 460 and the user of the second electronic device 470 are associated with a communication session token within the multi-user communication session. For example, a first communication session token may be assigned to the users of the electronic devices 460/470 when the users initially join the multi-user communication session (e.g., the token provides the users access to the shared three-dimensional environment in the multi-user communication session). In some examples, changing states of an electronic device in the multi-user communication session may include assigning a different communication session token to the user of the electronic device. For example, when the first electronic device 460 changes states (e.g., when the audio mode is activated at the first electronic device 460), the user of the first electronic device 460 may be associated with a second communication session token, different from the first communication session token discussed above. In some examples, in accordance with a determination that the user of the first electronic device 460 and the user of the second electronic device 470 are associated with different communication session tokens, the users of the electronic devices 460/470 are associated with different spatial groups within the multi-user communication session, as similarly discussed above.
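A minimal sketch of the token-based grouping described above is shown below, using hypothetical names: a different token is assigned when a device changes states, and users share a spatial group only while their tokens match.

```swift
import Foundation

// Hypothetical sketch: a communication session token is reassigned when a device
// changes states; users share a spatial group only while their tokens match.
struct SessionToken: Equatable {
    let rawValue: UUID
}

struct SessionParticipant {
    let userID: String
    var token: SessionToken
}

// Users are grouped into the same spatial group only while their tokens match.
func sharesSpatialGroup(_ a: SessionParticipant, _ b: SessionParticipant) -> Bool {
    a.token == b.token
}

// A state change (e.g., activating the audio mode) assigns the user a different token.
func applyStateChange(to participant: inout SessionParticipant) {
    participant.token = SessionToken(rawValue: UUID())
}
```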
In some examples, as shown in
As described above with reference to
In some examples, the first spatial group 440 has a spatial arrangement (e.g., spatial template) that is different from a spatial arrangement of the second spatial group 445. For example, the display of the two-dimensional representation 425 within the three-dimensional environment 450A may be independent of the display of the two-dimensional representation 427 within the three-dimensional environment 450B. If the first electronic device 460 receives an input corresponding to movement of the two-dimensional representation 425 corresponding to the user of the second electronic device 470, the first electronic device 460 moves the two-dimensional representation 425 within the three-dimensional environment 450A without causing movement of the two-dimensional representation 427 corresponding to the user of the first electronic device 460 at the second electronic device 470. As another example, if the first electronic device 460 detects a change in a location of the viewpoint of the user of the first electronic device 460 (e.g., due to movement of the first electronic device 460 within the physical environment surrounding the first electronic device 460), the second electronic device 470 does not move the two-dimensional representation 427 corresponding to the user of the first electronic device 460 in the three-dimensional environment 450B. Accordingly, spatial truth becomes localized to the particular spatial group that the users are located within. For example, the user of the first electronic device 460 has spatial truth with the two-dimensional representation 425 in the second spatial group 445 (and any other users that may be within the second spatial group 445), and the user of the second electronic device 470 has spatial truth with the two-dimensional representation 427 in the first spatial group 440 (and any other users that may be within the first spatial group 440).
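The localized spatial truth described above may be sketched as follows, using hypothetical names: movement of another user's two-dimensional representation updates only the local layout, and placements are propagated only to devices that share the mover's spatial group.

```swift
import Foundation

// Hypothetical sketch: while users are in different spatial groups, moving another
// user's two-dimensional representation updates only the local layout and is not
// broadcast to the other devices.
struct Placement {
    var x, y, z: Double
}

final class RepresentationLayout {
    private(set) var placements: [String: Placement] = [:]   // keyed by remote user ID
    var sharesSpatialGroupWithRemoteUsers = false             // false => spatial truth is local

    func move(userID: String,
              to placement: Placement,
              broadcast: (String, Placement) -> Void) {
        placements[userID] = placement
        // Only devices in the same spatial group keep these placements synchronized.
        if sharesSpatialGroupWithRemoteUsers {
            broadcast(userID, placement)
        }
    }
}
```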
Additionally, display of content in the three-dimensional environment 450A may be independent of display of content in the three-dimensional environment 450B while the users of the electronic devices 460/470 are in different spatial groups within the multi-user communication session. As an example, in
Further, as similarly discussed above, input directed to the virtual object 430 may only be reflected in the three-dimensional environment 450A. For example, if the first electronic device 460 receives an input corresponding to a request to move the virtual object 430 within the three-dimensional environment 450A, the first electronic device 460 may move the virtual object 430 in accordance with the input. As another example, input received at the second electronic device 470 may not affect the display of the virtual object 430 in the three-dimensional environment 450A at the first electronic device 460. In some examples, if the second electronic device 470 receives input corresponding to a request to display content (e.g., in a virtual object similar to the virtual object 430), the second electronic device 470 displays the content in the three-dimensional environment 450B, without causing the first electronic device 460 to display the content and/or to move the virtual object 430 to make space for the content in the three-dimensional environment 450A. Accordingly, display of content in the three-dimensional environment 450A while the users of the electronic devices 460/470 are in different spatial groups causes the content to only be associated with one spatial group (e.g., such as the second spatial group 445 as discussed above) based on the user who provided the input for causing display of the content.
In
In some examples, in response to receiving the selection input 412B, the first electronic device 460 deactivates the audio mode. For example, the first electronic device 460 transmits (e.g., directly or indirectly) data including a command to the second electronic device 470 to redisplay the avatar 417 corresponding to the user of the first electronic device 460 in the three-dimensional environment 450B at the second electronic device 470. Alternatively, the first electronic device 460 resumes transmitting/broadcasting data for displaying the avatar 417 corresponding to the user of the first electronic device 460 in the three-dimensional environment 450B at the second electronic device 470. In some examples, when the second electronic device 470 receives the data transmitted/broadcasted by the first electronic device 460, the second electronic device 470 determines whether the first electronic device 460 and the second electronic device 470 are operating in the same state once again. For example, because the first electronic device 460 has resumed broadcasting data for displaying the avatar 417 at the second electronic device 470, and the second electronic device 470 was broadcasting data for displaying the avatar 415 corresponding to the user of the second electronic device 470 at the first electronic device 460 when the audio mode was first activated, the electronic devices 460/470 are now broadcasting the same information, and thus, are operating in the same state once again. Additionally or alternatively, in some examples, when the first electronic device 460 deactivates the audio mode, the user of the first electronic device 460 is reassociated with the first communication session token discussed previously above. In some examples, in accordance with a determination that the user of the first electronic device 460 and the user of the second electronic device 470 are associated with the same communication session token, the users of the electronic devices 460/470 are associated with a same spatial group within the multi-user communication session.
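One possible reading of the state comparison described above is sketched below, using hypothetical names: each device compares what it is broadcasting with what the remote device is broadcasting, and matching broadcasts indicate that the devices are operating in the same state and that their users should share a spatial group.

```swift
import Foundation

// Hypothetical sketch: devices compare what they broadcast (avatar data vs. audio only);
// matching broadcasts mean the devices operate in the same state and share a spatial group.
enum Broadcast: Equatable {
    case avatarAndAudio
    case audioOnly
}

struct DeviceBroadcastState {
    let name: String
    var broadcast: Broadcast
}

func operateInSameState(_ a: DeviceBroadcastState, _ b: DeviceBroadcastState) -> Bool {
    a.broadcast == b.broadcast
}

// Example: deactivating the audio mode resumes the avatar broadcast, so both devices
// match again and their users are regrouped into the same spatial group.
func demonstrateRegrouping() -> Bool {
    var first = DeviceBroadcastState(name: "first", broadcast: .audioOnly)
    let second = DeviceBroadcastState(name: "second", broadcast: .avatarAndAudio)
    first.broadcast = .avatarAndAudio
    return operateInSameState(first, second)   // true: same spatial group once again
}
```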
Accordingly, as shown in
As discussed above with reference to
It should be understood that, in some examples, the determination of the states of the electronic devices 460/470 may occur at both electronic devices 460/470 (e.g., rather than just at the second electronic device 470 as discussed above by way of example). Additionally, it should be understood that, in some examples, the first electronic device 460 and/or the second electronic device 470 may periodically evaluate the states in which the electronic devices 460/470 are operating (e.g., in the manners discussed above) to determine whether the users of the electronic devices 460/470 should be associated with a same spatial group or different spatial groups (and/or any updates therein).
As described above, while electronic devices are in a multi-user communication session, altering display of an avatar corresponding to one user causes the users of the electronic devices to be grouped into different spatial groups within the multi-user communication session. Attention is now directed to displaying private exclusive content in the three-dimensional environment shared between the first electronic device and the second electronic device. As described below, private exclusive content (e.g., such as immersive video or an immersive three-dimensional scene/environment) that is not shared between the first electronic device and the second electronic device and displayed in the three-dimensional environment optionally causes the users of the electronic devices to be grouped into different (e.g., separate) spatial groups within the multi-user communication session.
As shown in
As previously discussed herein, in some examples, virtual objects (e.g., application windows and user interfaces, representations of content, application icons, and the like) that are viewable by a user may be private while the user is participating in a multi-user communication session with one or more other users (e.g., via electronic devices that are communicatively linked in the multi-user communication session). For example, as discussed above, the user of the second electronic device 570 is optionally viewing the user interface element 530 in three-dimensional environment 550B. In some examples, a representation of the user interface element 530″ is displayed in the three-dimensional environment 550A at the first electronic device 560 with the avatar 515 corresponding to the user of the second electronic device 570. As similarly discussed above, in some examples, the representation of the user interface element 530″ displayed in the three-dimensional environment 550A is optionally an occluded (e.g., a faded or blurred) representation of the user interface element 530 displayed in three-dimensional environment 550B. For example, the user of the first electronic device 560 is prevented from viewing the contents of the user interface element 530 displayed in three-dimensional environment 550B at the second electronic device 570, as shown in
As previously discussed herein, in
In some examples, as previously discussed above, the user of the first electronic device 560 and the user of the second electronic device 570 become associated with (e.g., grouped into) different spatial groups within the multi-user communication session when one of the electronic devices changes states. For example, if one of the electronic devices changes states, the electronic device transmits an indication (e.g., directly or indirectly) to the other electronic device(s) in the multi-user communication session indicating that the electronic device has changed states. As described in more detail below, an electronic device in the multi-user communication session changes state when presentation of content in the shared three-dimensional environment changes.
As shown in
As shown in
In some examples, while the first electronic device 560 and the second electronic device 570 are in the multi-user communication session, the first electronic device 560 and the second electronic device 570 are associated with an environment identifier (ID) within the multi-user communication session. For example, a first environment ID may be assigned to the users of the electronic devices 560/570 when the users initially join the multi-user communication session (e.g., the environment ID corresponds to the shared three-dimensional environment in the multi-user communication session). In some examples, changing states of an electronic device in the multi-user communication session may include assigning a different environment ID to the user of the electronic device. For example, when the second electronic device 570 changes states (e.g., when the second electronic device 570 displays the immersive content 552 in
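A minimal sketch of the environment-identifier mechanism described above is shown below, using hypothetical names: presenting private exclusive content assigns the device a new environment ID, and only devices with matching environment IDs remain in the same spatial group.

```swift
import Foundation

// Hypothetical sketch: an environment identifier tracks which environment a device is
// presenting; presenting private exclusive (e.g., immersive) content assigns a new ID,
// which places that user in a separate spatial group.
struct EnvironmentID: Equatable {
    let rawValue: String
}

struct DeviceEnvironmentState {
    var environmentID: EnvironmentID
}

func presentPrivateImmersiveContent(on device: inout DeviceEnvironmentState,
                                    contentID: String) {
    device.environmentID = EnvironmentID(rawValue: "private-immersive:\(contentID)")
}

// Devices remain in the same spatial group only while their environment IDs match.
func sharesSpatialGroup(_ a: DeviceEnvironmentState, _ b: DeviceEnvironmentState) -> Bool {
    a.environmentID == b.environmentID
}
```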
In some examples, as similarly discussed above, the users of the electronic devices 560/570 may be grouped into different spatial groups within the multi-user communication session when the first electronic device 560 and the second electronic device 570 are no longer operating in the same state. For example, as shown in
In some examples, as shown in
As previously described above, the display of avatars 515/517 in three-dimensional environments 550A/550B is optionally accompanied by the presentation of an audio effect corresponding to a voice of each of the users of the electronic devices 570/560, which, in some examples, may be spatialized such that the audio appears to the user of the electronic devices 570/560 to emanate from the locations of avatars 515/517 in three-dimensional environments 550A/550B. In some examples, as shown in
As mentioned previously herein, in some examples, while the users of the electronic devices 560/570 are grouped in different spatial groups within the multi-user communication session, the users experience spatial truth that is localized based on the spatial group each user is located in. For example, as previously discussed above, the display of content (and subsequent interactions with the content) in the three-dimensional environment 550A at the first electronic device 560 may be independent of the display of content in the three-dimensional environment 550B at the second electronic device 570. As an example, in
In
In some examples, in response to receiving the selection input 572B directed to the exit option 513 while the first electronic device 560 and the second electronic device 570 are in different spatial groups, the second electronic device 570 ceases display of the immersive content 552 in the three-dimensional environment 550B, as shown in
Therefore, as shown in
As discussed above with reference to
As previously discussed herein, because the application window 535 is not shared between the first electronic device 560 and the second electronic device 570, the application window 535 is currently private to the user of the first electronic device 560 (e.g., but is not exclusive to the user of the first electronic device 560). Accordingly, as shown in
It should be understood that, while the immersive content 552 was described above as being an immersive art gallery, any type of immersive content can be provided. For example, the immersive content may refer to a video game, an immersive environmental rendering (e.g., a three-dimensional representation of a beach or a forest), a computer-generated model (e.g., a three-dimensional mockup of a house designed in a computer graphics application), and the like. Each of these types of immersive content optionally follow the above-described behavior for dictating the grouping of users into spatial groups within the multi-user communication session. In some examples, the immersive content may refer to any content that may be navigated by a user with three or six degrees of freedom.
As described above, while electronic devices are in a multi-user communication session, displaying private exclusive content at one electronic device causes the users of the electronic devices to be grouped into different spatial groups within the multi-user communication session. Attention is now directed to altering display of content that is shared among a first electronic device, a second electronic device, and a third electronic device in a multi-user communication session. As described below, changing a manner in which content (e.g., such as video content displayed in a two-dimensional application window) that is shared among the electronic devices is displayed in the three-dimensional environment optionally causes the users of the electronic devices to be grouped into different (e.g., separate) spatial groups within the multi-user communication session.
As shown in
In some examples, the application window 630 may be a shared virtual object in the shared three-dimensional environment. For example, as shown in
As previously discussed herein, in
In some examples, as previously discussed above, the user of the first electronic device 660 and the user of the second electronic device 670 become associated with (e.g., grouped into) different spatial groups within the multi-user communication session when one of the electronic devices changes states. For example, if one of the electronic devices changes states, the electronic device transmits an indication (e.g., directly or indirectly) to the other electronic device(s) in the multi-user communication session indicating that the electronic device has changed states. As described in more detail below, an electronic device in the multi-user communication session changes state when a manner in which shared content is presented in the shared three-dimensional environment changes.
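For the full-screen case, the state change may be modeled as an announced change in presentation mode, as in the following sketch with hypothetical names.

```swift
import Foundation

// Hypothetical sketch: switching shared content from a windowed presentation to a
// full-screen presentation is treated as a state change and announced to the other
// devices, which may then regroup the user into a shared exclusive spatial group.
enum PresentationMode: Equatable {
    case windowed     // content bounded by the application window
    case fullScreen   // content displayed at a larger, exclusive scale
}

struct SharedContentPresentation {
    var mode: PresentationMode = .windowed
}

func enterFullScreen(_ presentation: inout SharedContentPresentation,
                     announceStateChange: (PresentationMode) -> Void) {
    presentation.mode = .fullScreen
    announceStateChange(presentation.mode)   // indication sent directly or indirectly
}
```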
In some examples, the video content of the application window 630 is being displayed in a window mode in the shared three-dimensional environment. For example, the video content displayed in the three-dimensional environment is bounded/limited by a size of the application window 630, as shown in
As shown in
As described herein, the first electronic device 660, the second electronic device 670, and the third electronic device (not shown) are in a multi-user communication session, such that the first electronic device 660, the second electronic device 670, and the third electronic device optionally display the shared three-dimensional environments 650A/650B. Because the first electronic device 660 is now displaying the video content of the application window 630 in the full-screen mode in the three-dimensional environment 650A, as shown in
In some examples, the user of the first electronic device 660 may be grouped into a different spatial group from the user of the second electronic device 670 and the third electronic device within the multi-user communication session when the three electronic devices are no longer operating in the same state. For example, as shown in
As shown in
In some examples, as shown in
As similarly described above, the display of avatars 615/617/619 in three-dimensional environments 650A/650B is optionally accompanied by the presentation of an audio effect corresponding to a voice of each of the users of the three electronic devices, which, in some examples, may be spatialized such that the audio appears to the users of the three electronic devices to emanate from the locations of avatars 615/617/619 in the three-dimensional environments 650A/650B. In some examples, as shown in
In some examples, when the user of the first electronic device 660 is grouped into the second spatial group 649 that is separate from the first spatial group 640, the user of the second electronic device 670 and the user of the third electronic device (not shown) are arranged in a new spatial arrangement (e.g., spatial template) within the first spatial group 640. For example, as discussed above, in
As mentioned previously herein, in some examples, while the users of the three electronic devices are grouped in separate spatial groups within the multi-user communication session, the users experience spatial truth that is localized based on the spatial group each user is located in. For example, as previously discussed above, the display of content (and subsequent interactions with the content) in the three-dimensional environment 650A at the first electronic device 660 may be independent of the display of content in the three-dimensional environment 650B at the second electronic device 670, though the content of the application window(s) may still be synchronized (e.g., the same portion of video content (e.g., movie or television show content) is being played back in the application window(s) across the first electronic device 660 and the second electronic device 670). As an example, if the first electronic device 660 detects a scrubbing input (or similar input) provided by the user of the first electronic device 660 directed to the application window 630 in the three-dimensional environment 650A that causes the playback position within the video content to change (e.g., rewind, fast-forward, pause, etc.), the second electronic device 670 would also update the playback position within the video content in the application window 630 in the three-dimensional environment 650B to maintain synchronization of the playback of the video content.
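To make the synchronization behavior concrete, the following is a minimal sketch of a playback-sync message and controller, assuming hypothetical type and field names; a scrub at one device is applied locally and broadcast so that peer devices in the session update to the same playback position.

```swift
import Foundation

/// Hypothetical playback-sync message: a scrub at one device is mirrored at the others
/// so the same portion of the video content is played back across devices.
struct PlaybackUpdate: Codable {
    let windowID: UUID
    let position: TimeInterval   // new playback position in seconds
    let isPaused: Bool
}

final class SharedPlaybackController {
    private(set) var position: TimeInterval = 0
    private(set) var isPaused: Bool = false
    var broadcast: (PlaybackUpdate) -> Void = { _ in }   // injected transport
    let windowID: UUID

    init(windowID: UUID) { self.windowID = windowID }

    /// Local scrubbing input: update locally, then notify peers so they stay in sync.
    func scrub(to newPosition: TimeInterval) {
        position = newPosition
        broadcast(PlaybackUpdate(windowID: windowID, position: newPosition, isPaused: isPaused))
    }

    /// Remote update received from another device in the session.
    func apply(_ update: PlaybackUpdate) {
        guard update.windowID == windowID else { return }
        position = update.position
        isPaused = update.isPaused
    }
}
```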
As an example, as shown in
In some examples, in response to receiving the selection input 672B followed by the movement input 674, the second electronic device 670 moves the application window 630 in accordance with the movement input 674. For example, as shown in
It should be understood that additional or alternative interactions with virtual objects in the shared three-dimensional environment are localized to the spatial group in which the particular users in the multi-user communication session are located. For example, similar to the example provided in
In some examples, a respective spatial group that is associated with the multi-user communication session may be associated with a local driver (e.g., a particular electronic device associated with a user in the multi-user communication session) that is configured to control one or more aspects of the display of shared content within the respective spatial group. For example, the local driver controls a location at which shared content is displayed in the three-dimensional environment, a size at which the shared content is displayed in the three-dimensional environment, and/or an orientation with which the shared content is displayed in the three-dimensional environment. Accordingly, in some examples, if the local driver causes one or more aspects of the display of the shared content (e.g., location, size, orientation, etc.) to change in a particular spatial group, the display of the shared content will be synchronized for other users who are also in the spatial group, such that changes in the one or more aspects of the display of the shared content are reflected in the respective three-dimensional environments of the other users (e.g., as similarly described above).
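The following is a minimal sketch, under assumed type and field names, of a per-spatial-group local driver that alone is permitted to change the layout (location, size, orientation) of a shared window and whose accepted changes are mirrored to the other devices in the group.

```swift
import Foundation

/// Illustrative layout state for a shared window within a spatial group.
struct SharedWindowLayout: Equatable {
    var x: Float, y: Float, z: Float     // position in the shared environment
    var scale: Float                     // display size
    var orientationDegrees: Float        // display orientation
}

final class SpatialGroupState {
    let groupID: UUID
    var localDriverID: UUID                           // participant allowed to drive layout changes
    private(set) var layout: SharedWindowLayout
    var broadcast: (SharedWindowLayout) -> Void = { _ in }

    init(groupID: UUID, localDriverID: UUID, layout: SharedWindowLayout) {
        self.groupID = groupID
        self.localDriverID = localDriverID
        self.layout = layout
    }

    /// Only the local driver's changes are accepted; accepted changes are mirrored to the
    /// other devices in the same spatial group so their environments stay synchronized.
    func requestLayoutChange(_ newLayout: SharedWindowLayout, from participantID: UUID) -> Bool {
        guard participantID == localDriverID else { return false }
        layout = newLayout
        broadcast(newLayout)
        return true
    }
}
```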
In some examples, the local driver corresponds to the user in the multi-user communication session who initially shared the content with the other users in the multi-user communication session within the same spatial group (e.g., such that the content becomes viewable to the other users, such as application window 630 in
In some examples, the local driver is updated based on input(s) that cause an orientation mode of the application window 630 to change in the shared three-dimensional environment. For example, as shown in
In
In some examples, as shown in
In some examples, when the orientation mode of the application window 630 is updated in response to the selection of the orientation affordance 646 in
As discussed previously above, in some examples, interactions directed to the application window 630 that cause one or more aspects of the display of the application window 630 to be updated (e.g., other than the orientation mode as discussed above) are permitted only for the local driver of the spatial group 640. Accordingly, in the example of
In some examples, as shown in
In some examples, it may be advantageous to facilitate user input for maintaining the users in the multi-user communication session within the same spatial group. For example, the display of avatars of users of electronic devices in a shared three-dimensional environment while the electronic devices are in the multi-user communication session enables the users to experience content in the shared three-dimensional environment with an indication of the presence of the other users, which enhances the users' collective shared experience and interactions. As discussed herein above, changing states of one of the electronic devices in the multi-user communication session causes the user of the electronic device to be grouped into a separate spatial group in the multi-user communication session, which breaks continuity of experiencing the same content. Accordingly, in some examples, when shared content (e.g., shared application window 630) is displayed in the shared three-dimensional environment, if one of the electronic devices changes states (e.g., due to the display of the shared content in the full-screen mode as discussed above), the electronic device transmits an indication to the other electronic devices that prompts user input for synchronizing the display of the shared content.
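As an illustrative sketch of such a prompt, the following assumes a hypothetical invitation message and prompt callback; a device that enters the full-screen mode sends the invitation, and a receiving device either joins (synchronizing the display of the shared content and the spatial group) or declines and remains in its current spatial group.

```swift
import Foundation

/// Hypothetical invitation sent when a peer begins viewing shared content in full-screen mode.
struct FullScreenInvitation: Codable {
    let fromUserID: UUID
    let windowID: UUID
}

enum InvitationResponse { case join, decline }

/// Present a notification for the invitation and act on the user's choice.
func handle(_ invitation: FullScreenInvitation,
            presentPrompt: (FullScreenInvitation) -> InvitationResponse,
            enterFullScreen: (UUID) -> Void) {
    switch presentPrompt(invitation) {
    case .join:
        enterFullScreen(invitation.windowID)   // joins the sender's spatial group
    case .decline:
        break                                   // remain in the current spatial group
    }
}
```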
As an example, as shown in
As shown in
In some examples, in response to detecting the selection input 672C, the second electronic device 670 optionally presents the video content of the application window 630 in the full-screen mode in the three-dimensional environment 650B, as shown in
In some examples, when the second electronic device 670 displays the video content of the application window 630 in the full-screen mode in the three-dimensional environment 650B, the user of the second electronic device 670 joins the user of the first electronic device 660 in the second spatial group 649, as shown in
Additionally, in some examples, as previously described herein, when the user of the second electronic device 670 joins the user of the first electronic device 660 in the second spatial group 649 as shown in
In some examples, rather than display a notification (e.g., such as notification element 620) corresponding to an invitation from the first electronic device 660 to join in viewing the video content of the application window 630 in the full-screen mode as discussed above with reference to
In some examples, as similarly described above, when the second electronic device 670 and the third electronic device (not shown) join the first electronic device 660 in viewing the video content in the full-screen mode as shown in
Additionally, in some examples, as previously described herein, when the user of the second electronic device 670 and the user of the third electronic device (not shown) join the user of the first electronic device 660 in the second spatial group 649 as shown in
It is understood that the examples shown and described herein are merely exemplary and that additional and/or alternative elements may be provided within the three-dimensional environment for interacting with the illustrative content. It should be understood that the appearance, shape, form, and size of each of the various user interface elements and objects shown and described herein are exemplary and that alternative appearances, shapes, forms, and/or sizes may be provided. For example, the virtual objects representative of application windows (e.g., virtual objects 330, 430, 535 and 630) may be provided in an alternative shape than a rectangular shape, such as a circular shape, triangular shape, etc. In some examples, the various selectable options (e.g., the option 523A, the options 511 and 513, the option 626, and/or the options 621 and 622), user interface elements (e.g., user interface element 516 or user interface element 620), control elements (e.g., playback controls 556 or 656), etc. described herein may be selected via user verbal commands (e.g., a “select option” verbal command). Additionally or alternatively, in some examples, the various options, user interface elements, control elements, etc. described herein may be selected and/or manipulated via user input received via one or more separate input devices in communication with the electronic device(s). For example, selection input may be received via physical input devices, such as a mouse, trackpad, keyboard, etc. in communication with the electronic device(s).
As shown in
Additionally, in some examples, at 712, the first electronic device replaces display of the avatar corresponding to the user of the second electronic device with a two-dimensional representation of the user of the second electronic device. For example, as shown in
It is understood that process 700 is an example and that more, fewer, or different operations can be performed in the same or in a different order. Additionally, the operations in process 700 described above are, optionally, implemented by running one or more functional modules in an information processing apparatus such as general-purpose processors (e.g., as described with respect to
In some examples, at 804, while displaying the computer-generated environment including the avatar corresponding to the user of the second electronic device, the first electronic device receives an indication corresponding to a change in a state of the second electronic device. For example, as described with reference to
In some examples, at 806, in response to receiving the indication, at 808, in accordance with a determination that the state of the second electronic device is a first state, the first electronic device replaces display, via the display, of the avatar corresponding to the user of the second electronic device with a two-dimensional representation of the user of the second electronic device in the computer-generated environment. For example, as shown in
In some examples, at 810, in accordance with a determination that the state of the second electronic device is a second state, different from the first state, the first electronic device maintains display, via the display, of the avatar corresponding to the user of the second electronic device in the computer-generated environment. For example, as shown in
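A compact sketch of this state-dependent branching is shown below; the enumeration cases and names are assumptions used only to illustrate replacing versus maintaining the avatar.

```swift
/// Illustrative representation choices for a peer user in the computer-generated environment.
enum PeerRepresentation { case avatar, twoDimensional }

/// Illustrative states corresponding to whether the peer remains in the same spatial group.
enum PeerState { case firstState, secondState }

func representation(for state: PeerState) -> PeerRepresentation {
    switch state {
    case .firstState:
        return .twoDimensional   // replace the avatar with a two-dimensional representation
    case .secondState:
        return .avatar           // maintain display of the avatar
    }
}
```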
It is understood that process 800 is an example and that more, fewer, or different operations can be performed in the same or in a different order. Additionally, the operations in process 800 described above are, optionally, implemented by running one or more functional modules in an information processing apparatus such as general-purpose processors (e.g., as described with respect to
Therefore, according to the above, some examples of the disclosure are directed to a method comprising, at a first electronic device in communication with a display, one or more input devices, and a second electronic device: while in a communication session with the second electronic device, displaying, via the display, a computer-generated environment including an avatar corresponding to a user of the second electronic device; while displaying the computer-generated environment including the avatar corresponding to the user of the second electronic device, receiving, via the one or more input devices, a first input corresponding to a request to display content in the computer-generated environment; and in response to receiving the first input, in accordance with a determination that the content is a first type of content, displaying, via the display, a first object corresponding to the content in the computer-generated environment and replacing display of the avatar corresponding to the user of the second electronic device with a two-dimensional representation of the user of the second electronic device, and in accordance with a determination that the content is a second type of content, different from the first type of content, concurrently displaying, via the display, the first object corresponding to the content and the avatar corresponding to the user of the second electronic device in the computer-generated environment.
Additionally or alternatively, in some examples, the first electronic device and the second electronic device are a head-mounted display, respectively. Additionally or alternatively, in some examples, the first type of content includes content that is private to a user of the first electronic device. Additionally or alternatively, in some examples, the first object is a private application window associated with an application operating on the first electronic device. Additionally or alternatively, in some examples, the first object is a three-dimensional immersive environment. Additionally or alternatively, in some examples, the second type of content includes content that is shared between a user of the first electronic device and the user of the second electronic device. Additionally or alternatively, in some examples, the first object is a shared application window associated with an application operating on the first electronic device. Additionally or alternatively, in some examples, before receiving the first input, a user of the first electronic device and the user of the second electronic device are in a first spatial group within the communication session and while in the first spatial group, content is displayed at a predetermined location relative to a location of the avatar corresponding to the second electronic device and a location of a viewpoint of the user of the first electronic device in the computer-generated environment.
Additionally or alternatively, in some examples, the method further comprises, in response to receiving the first input, in accordance with the determination that the content is the first type of content, associating the user of the first electronic device with a second spatial group, separate from the first spatial group, within the communication session and displaying the first object corresponding to the content at a second predetermined location, different from the predetermined location, relative to the location of the viewpoint of the user of the first electronic device in the computer-generated environment. Additionally or alternatively, in some examples, the method further comprises, in response to receiving the first input, in accordance with the determination that the content is the second type of content, maintaining the user of the first electronic device and the user of the second electronic device in the first spatial group within the communication session and displaying the first object corresponding to the content at the predetermined location in the computer-generated environment. Additionally or alternatively, in some examples, the method further comprises, while displaying the computer-generated environment including the avatar corresponding to the user of the second electronic device and the first object corresponding to the content of the second type, receiving, via the one or more input devices, a second input corresponding to a request to display the content in a full screen mode in the computer-generated environment and in response to receiving the second input, displaying, via the display, the content in the full screen mode in the computer-generated environment.
Additionally or alternatively, in some examples, the method further comprises, in response to receiving the second input, replacing display of the avatar corresponding to the user of the second electronic device with the two-dimensional representation of the user of the second electronic device. Additionally or alternatively, in some examples, the avatar corresponding to the user of the second electronic device is displayed at a first location in the computer-generated environment before the second input is received and the two-dimensional representation of the user of the second electronic device is displayed in a predetermined region of the display that is separate from the first location in the computer-generated environment in response to receiving the second input. Additionally or alternatively, in some examples, the method further comprises, while displaying the computer-generated environment including the avatar corresponding to the user of the second electronic device, receiving, via the one or more input devices, a second input corresponding to a request to activate an audio output mode and, in response to receiving the second input, replacing display of the avatar corresponding to the user of the second electronic device with the two-dimensional representation of the user of the second electronic device. Additionally or alternatively, in some examples, the method further comprises, in response to receiving the second input, presenting audio corresponding to a voice of the user of the second electronic device.
Additionally or alternatively, in some examples, the avatar corresponding to the user of the second electronic device is displayed at a first location in the computer-generated environment before the second input is received and the two-dimensional representation of the user of the second electronic device is displayed at the first location in response to receiving the second input. Additionally or alternatively, in some examples, the method further comprises, while displaying the computer-generated environment including the two-dimensional representation of the user of the second electronic device and the first object corresponding to the content of the first type, receiving, via the one or more input devices, a second input corresponding to a request to move the first object in the computer-generated environment and, in response to receiving the second input, moving the first object corresponding to the content within the computer-generated environment in accordance with the second input, without moving the two-dimensional representation of the user of the second electronic device. Additionally or alternatively, in some examples, the method further comprises, while displaying the computer-generated environment including the two-dimensional representation of the user of the second electronic device and the first object corresponding to the content of the first type, receiving, via the one or more input devices, a second input corresponding to a request to cease display of the content in the computer-generated environment and, in response to receiving the second input, ceasing display of the first object corresponding to the content in the computer-generated environment and replacing display of the two-dimensional representation of the user of the second electronic device with the avatar corresponding to the user of the second electronic device.
Additionally or alternatively, in some examples, the method further comprises, in response to receiving the first input, in accordance with the determination that the content is the first type of content, transmitting, to the second electronic device, an indication of the display of the first object corresponding to the content in the computer-generated environment. Additionally or alternatively, in some examples, the avatar corresponding to the user of the second electronic device is displayed at a first location in the computer-generated environment before the first input is received and, in accordance with the determination that the content is the first type of content, the two-dimensional representation of the user of the second electronic device is displayed at the first location in response to receiving the first input. Additionally or alternatively, in some examples, the first electronic device is further in communication with a third electronic device and the computer-generated environment further includes a respective shared object before the first input is received. In some examples, the method further comprises, in response to receiving the first input, in accordance with the determination that the content is the first type of content, ceasing display of the respective shared object in the computer-generated environment and, in accordance with the determination that the content is the second type of content, concurrently displaying the first object corresponding to the content, the respective shared object, and the avatar corresponding to the user of the second electronic device in the computer-generated environment.
Some examples of the disclosure are directed to a method comprising, at a first electronic device in communication with a display, one or more input devices, and a second electronic device: while in a communication session with the second electronic device, displaying, via the display, a computer-generated environment including an avatar corresponding to a user of the second electronic device; while displaying the computer-generated environment including the avatar corresponding to the user of the second electronic device, receiving an indication corresponding to a change in a state of the second electronic device; and in response to receiving the indication, in accordance with a determination that the state of the second electronic device is a first state, replacing display, via the display, of the avatar corresponding to the user of the second electronic device with a two-dimensional representation of the user of the second electronic device in the computer-generated environment and, in accordance with a determination that the state of the second electronic device is a second state, different from the first state, maintaining display, via the display, of the avatar corresponding to the user of the second electronic device in the computer-generated environment.
Additionally or alternatively, in some examples, the first electronic device and the second electronic device are a head-mounted display, respectively. Additionally or alternatively, in some examples, the method further comprises, in response to receiving the indication, in accordance with the determination that the state of the second electronic device is the first state, presenting audio corresponding to a voice of the user of the second electronic device. Additionally or alternatively, in some examples, the avatar corresponding to the user of the second electronic device is displayed at a first location in the computer-generated environment before the indication is received and, in response to receiving the indication, in accordance with the determination that the state of the second electronic device is the first state, the two-dimensional representation of the user of the second electronic device is displayed at the first location. Additionally or alternatively, in some examples, the computer-generated environment further includes a first shared object before the indication is received. In some examples, the method further comprises, in response to receiving the indication, in accordance with the determination that the state of the second electronic device is a third state, different from the first state and the second state, maintaining display, via the display, of the first shared object in the computer-generated environment and replacing display of the avatar corresponding to the user of the second electronic device with the two-dimensional representation of the user of the second electronic device in the computer-generated environment, wherein the two-dimensional representation of the user is displayed adjacent to the first shared object in the computer-generated environment.
Additionally or alternatively, in some examples, receiving the indication corresponding to the change in the state of the second electronic device includes receiving data corresponding to a change in the display of the avatar corresponding to the user of the second electronic device in the computer-generated environment or receiving data corresponding to presentation of audio corresponding to a voice of the user of the second electronic device. Additionally or alternatively, in some examples, receiving the indication corresponding to the change in the state of the second electronic device includes receiving data corresponding to a change in display of content that is private to the user of the second electronic device in a computer-generated environment displayed at the second electronic device. Additionally or alternatively, in some examples, receiving the indication corresponding to the change in the state of the second electronic device includes receiving data corresponding to a request to display content that is shared by the user of the second electronic device in the computer-generated environment. Additionally or alternatively, in some examples, the computer-generated environment further includes a first shared object before the indication is received, and receiving the indication corresponding to the change in the state of the second electronic device includes receiving data corresponding to a change in display of the first shared object in a computer-generated environment displayed at the second electronic device.
Additionally or alternatively, in some examples, the determination that the state of the second electronic device is the first state is in accordance with a determination that a user of the first electronic device and the user of the second electronic device are in different spatial groups within the communication session and the determination that the state of the second electronic device is the second state is in accordance with a determination that the user of the first electronic device and the user of the second electronic device are in a same spatial group within the communication session. Additionally or alternatively, in some examples, the determination that the user of the first electronic device and the user of the second electronic device are in different spatial groups within the communication session is in accordance with a determination that the first electronic device and the second electronic device have different communication session tokens and the determination that the user of the first electronic device and the user of the second electronic device are in the same spatial group within the communication session is in accordance with a determination that the first electronic device and the second electronic device share a same communication session token. Additionally or alternatively, in some examples, the determination that the user of the first electronic device and the user of the second electronic device are in different spatial groups within the communication session is in accordance with a determination that the first electronic device and the second electronic device are associated with different environment identifiers and the determination that the user of the first electronic device and the user of the second electronic device are in the same spatial group within the communication session is in accordance with a determination that the first electronic device and the second electronic device are associated with a same environment identifier.
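As a minimal sketch of these determinations, assuming illustrative field names, the following compares communication session tokens or, alternatively, environment identifiers to decide whether two users are in the same spatial group.

```swift
import Foundation

/// Illustrative per-participant descriptor; field names are assumptions.
struct ParticipantDescriptor {
    let userID: UUID
    let sessionToken: String      // communication session token
    let environmentID: String     // environment identifier
}

/// Same spatial group when the devices share a same communication session token.
func sameGroupByToken(_ a: ParticipantDescriptor, _ b: ParticipantDescriptor) -> Bool {
    a.sessionToken == b.sessionToken
}

/// Alternatively, same spatial group when the devices are associated with a same environment identifier.
func sameGroupByEnvironment(_ a: ParticipantDescriptor, _ b: ParticipantDescriptor) -> Bool {
    a.environmentID == b.environmentID
}
```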
Additionally or alternatively, in some examples, the determination that the user of the first electronic device and the user of the second electronic device are in different spatial groups within the communication session is in accordance with a determination that the first electronic device and the second electronic device are displaying a respective object in different manners and the determination that the user of the first electronic device and the user of the second electronic device are in the same spatial group within the communication session is in accordance with a determination that the first electronic device and the second electronic device are displaying the respective object in a same manner.
Some examples of the disclosure are directed to an electronic device comprising: one or more processors; memory; and one or more programs stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for performing any of the above methods.
Some examples of the disclosure are directed to a non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by one or more processors of an electronic device, cause the electronic device to perform any of the above methods.
Some examples of the disclosure are directed to an electronic device, comprising: one or more processors; memory; and means for performing any of the above methods.
Some examples of the disclosure are directed to an information processing apparatus for use in an electronic device, the information processing apparatus comprising means for performing any of the above methods.
The foregoing description, for purpose of explanation, has been described with reference to specific examples. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The examples were chosen and described in order to best explain the principles of the invention and its practical applications, to thereby enable others skilled in the art to best use the invention and various described examples with various modifications as are suited to the particular use contemplated.
This application claims priority to U.S. Provisional Application No. 63/375,956, filed Sep. 16, 2022, and U.S. Provisional Application No. 63/514,505, filed Jul. 19, 2023, the contents of which are herein incorporated by reference in their entireties for all purposes.