This relates generally to systems and methods of establishing multi-user communication sessions in which at least a subset of participants within the multi-user communication sessions is collocated in a physical environment.
Some computer graphical environments provide two-dimensional and/or three-dimensional environments where at least some objects displayed for a user's viewing are virtual and generated by a computer. In some examples, the three-dimensional environments are presented by multiple devices communicating in a multi-user communication session. In some examples, an avatar (e.g., a representation) of each non-collocated user participating in the multi-user communication session (e.g., via the computing devices) is displayed in the three-dimensional environment of the multi-user communication session. In some examples, content can be shared in the three-dimensional environment for viewing and interaction by multiple users participating in the multi-user communication session.
Some examples of the disclosure are directed to systems and methods for determining a placement location for an avatar corresponding to a remote user within a multi-user communication session that includes a group of collocated users when initiating the multi-user communication session. In some examples, a method is performed at a first electronic device in communication with one or more displays and one or more input devices, wherein the first electronic device is collocated with a second electronic device in a first physical environment. In some examples, the first electronic device detects an indication of a request to enter a communication session with a third electronic device, wherein the third electronic device is non-collocated in the first physical environment. In some examples, in response to detecting the indication, the first electronic device enters the communication session that includes the first electronic device, the second electronic device, and the third electronic device. In some examples, the first electronic device obtains first data corresponding to a location of a user of the second electronic device relative to a viewpoint of the first electronic device in the first physical environment. In some examples, the first electronic device obtains second data corresponding to an orientation of the second electronic device relative to the viewpoint of the first electronic device in the first physical environment. In some examples, the first electronic device displays, via the one or more displays, a visual representation of a user of the third electronic device at a second location in a computer-generated environment based on the first data and the second data.
The full descriptions of these examples are provided in the Drawings and the Detailed Description, and it is understood that this Summary does not limit the scope of the disclosure in any way.
For improved understanding of the various examples described herein, reference should be made to the Detailed Description below along with the following drawings. Like reference numerals often refer to corresponding parts throughout the drawings.
Some examples of the disclosure are directed to systems and methods for determining a placement location for an avatar corresponding to a remote user within a multi-user communication session that includes a group of collocated users when initiating the multi-user communication session. In some examples, a method is performed at a first electronic device in communication with one or more displays and one or more input devices, wherein the first electronic device is collocated with a second electronic device in a first physical environment. In some examples, the first electronic device detects an indication of a request to enter a communication session with a third electronic device, wherein the third electronic device is non-collocated in the first physical environment. In some examples, in response to detecting the indication, the first electronic device enters the communication session that includes the first electronic device, the second electronic device, and the third electronic device. In some examples, the first electronic device obtains first data corresponding to a location of a user of the second electronic device relative to a viewpoint of the first electronic device in the first physical environment. In some examples, the first electronic device obtains second data corresponding to an orientation of the second electronic device relative to the viewpoint of the first electronic device in the first physical environment. In some examples, the first electronic device displays, via the one or more displays, a visual representation of a user of the third electronic device at a second location in a computer-generated environment based on the first data and the second data.
As used herein, a spatial group corresponds to a group or number of participants (e.g., users) in a multi-user communication session. In some examples, a spatial group in the multi-user communication session has a spatial arrangement that dictates locations of users and content that are located in the spatial group. In some examples, users in the same spatial group within the multi-user communication session experience spatial truth according to the spatial arrangement of the spatial group. In some examples, when the user of the first electronic device is in a first spatial group and the user of the second electronic device is in a second spatial group in the multi-user communication session, the users experience spatial truth that is localized to their respective spatial groups. In some examples, while the user of the first electronic device and the user of the second electronic device are grouped into separate spatial groups within the multi-user communication session, if the first electronic device and the second electronic device return to the same operating state, the user of the first electronic device and the user of the second electronic device are regrouped into the same spatial group within the multi-user communication session.
As used herein, a hybrid spatial group corresponds to a group or number of participants (e.g., users) in a multi-user communication session in which at least a subset of the participants is non-collocated in a physical environment. For example, as described via one or more examples in this disclosure, a hybrid spatial group includes at least two participants who are collocated in a first physical environment and at least one participant who is non-collocated with the at least two participants in the first physical environment (e.g., the at least one participant is located in a second physical environment, different from the first physical environment). In some examples, a hybrid spatial group in the multi-user communication session has a spatial arrangement that dictates locations of users and content that are located in the spatial group. In some examples, users in the same hybrid spatial group within the multi-user communication session experience spatial truth according to the spatial arrangement of the spatial group, as similarly discussed above.
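By way of illustration only, the notion of a spatial group and a hybrid spatial group can be sketched as a simple data structure. The following minimal Python sketch is illustrative and assumption-laden (the names Participant, is_collocated, and content_anchors are not defined herein); it shows a group whose spatial arrangement records participant locations and orientations, and a hybrid group as one that mixes collocated and non-collocated participants.

    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class Participant:
        user_id: str
        is_collocated: bool              # True if physically present in the local environment
        position: Tuple[float, float]    # (x, z) location in the group's shared coordinate space
        yaw_radians: float               # facing direction in the shared coordinate space

    @dataclass
    class SpatialGroup:
        """Spatial arrangement that dictates locations of users and content in the group."""
        participants: List[Participant] = field(default_factory=list)
        content_anchors: List[Tuple[float, float]] = field(default_factory=list)

        @property
        def is_hybrid(self) -> bool:
            # A hybrid spatial group mixes collocated and non-collocated participants.
            return len({p.is_collocated for p in self.participants}) == 2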
In some examples, initiating a multi-user communication session may include interaction with one or more user interface elements. In some examples, a user's gaze may be tracked by an electronic device as an input for targeting a selectable option/affordance within a respective user interface element that is displayed in the three-dimensional environment. For example, gaze can be used to identify one or more options/affordances targeted for selection using another selection input. In some examples, a respective option/affordance may be selected using hand-tracking input detected via an input device in communication with the electronic device. In some examples, objects displayed in the three-dimensional environment may be moved and/or reoriented in the three-dimensional environment in accordance with movement input detected via the input device.
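By way of illustration, the gaze-plus-gesture selection described above can be sketched as follows. This minimal Python example uses assumed, simplified geometry (2D normalized display coordinates and circular hit regions) that is not specified herein; it only shows the division of labor in which gaze identifies the targeted option and a separate input (e.g., an air pinch) confirms the selection.

    from dataclasses import dataclass
    from typing import List, Optional, Tuple

    @dataclass
    class Affordance:
        name: str
        center: Tuple[float, float]   # (x, y) in normalized display coordinates
        radius: float                 # hit-test radius

    def gaze_target(gaze_point: Tuple[float, float],
                    affordances: List[Affordance]) -> Optional[Affordance]:
        """Return the affordance currently targeted by the user's gaze, if any."""
        gx, gy = gaze_point
        for a in affordances:
            if (gx - a.center[0]) ** 2 + (gy - a.center[1]) ** 2 <= a.radius ** 2:
                return a
        return None

    def handle_input(gaze_point, pinch_detected: bool,
                     affordances: List[Affordance]) -> Optional[str]:
        """Select the gazed-at affordance only when the confirming gesture is detected."""
        target = gaze_target(gaze_point, affordances)
        return target.name if (target is not None and pinch_detected) else None

    options = [Affordance("Join", (0.4, 0.5), 0.05), Affordance("Dismiss", (0.6, 0.5), 0.05)]
    print(handle_input((0.41, 0.5), True, options))   # -> "Join"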
In some examples, as shown in
In some examples, display 120 has a field of view visible to the user (e.g., that may or may not correspond to a field of view of external image sensors 114b and 114c). Because display 120 is optionally part of a head-mounted device, the field of view of display 120 is optionally the same as or similar to the field of view of the user's eyes. In other examples, the field of view of display 120 may be smaller than the field of view of the user's eyes. In some examples, electronic device 101 may be an optical see-through device in which display 120 is a transparent or translucent display through which portions of the physical environment may be directly viewed. In some examples, display 120 may be included within a transparent lens and may overlap all or only a portion of the transparent lens. In other examples, electronic device 101 may be a video-passthrough device in which display 120 is an opaque display configured to display images of the physical environment captured by external image sensors 114b and 114c. While a single display 120 is shown, it should be appreciated that display 120 may include a stereo pair of displays.
In some examples, in response to a trigger, the electronic device 101 may be configured to display a virtual object 104 in the XR environment represented by a cube illustrated in
It should be understood that virtual object 104 is a representative virtual object and one or more different virtual objects (e.g., of various dimensionality such as two-dimensional or other three-dimensional virtual objects) can be included and rendered in a three-dimensional XR environment. For example, the virtual object can represent an application or a user interface displayed in the XR environment. In some examples, the virtual object can represent content corresponding to the application and/or displayed via the user interface in the XR environment. In some examples, the virtual object 104 is optionally configured to be interactive and responsive to user input (e.g., air gestures, such as air pinch gestures, air tap gestures, and/or air touch gestures), such that a user may virtually touch, tap, move, rotate, or otherwise interact with, the virtual object 104.
In some examples, displaying an object in a three-dimensional environment may include interaction with one or more user interface objects in the three-dimensional environment. For example, initiation of display of the object in the three-dimensional environment can include interaction with one or more virtual options/affordances displayed in the three-dimensional environment. In some examples, a user's gaze may be tracked by the electronic device as an input for identifying one or more virtual options/affordances targeted for selection when initiating display of an object in the three-dimensional environment. For example, gaze can be used to identify one or more virtual options/affordances targeted for selection using another selection input. In some examples, a virtual option/affordance may be selected using hand-tracking input detected via an input device in communication with the electronic device. In some examples, objects displayed in the three-dimensional environment may be moved and/or reoriented in the three-dimensional environment in accordance with movement input detected via the input device.
In the discussion that follows, an electronic device that is in communication with a display generation component and one or more input devices is described. It should be understood that the electronic device optionally is in communication with one or more other physical user-interface devices, such as a touch-sensitive surface, a physical keyboard, a mouse, a joystick, a hand tracking device, an eye tracking device, a stylus, etc. Further, as described above, it should be understood that the described electronic device, display and touch-sensitive surface are optionally distributed amongst two or more devices. Therefore, as used in this disclosure, information displayed on the electronic device or by the electronic device is optionally used to describe information outputted by the electronic device for display on a separate display device (touch-sensitive or not). Similarly, as used in this disclosure, input received on the electronic device (e.g., touch input received on a touch-sensitive surface of the electronic device, or touch input received on the surface of a stylus) is optionally used to describe input received on a separate input device, from which the electronic device receives input information.
The device typically supports a variety of applications, such as one or more of the following: a drawing application, a presentation application, a word processing application, a website creation application, a disk authoring application, a spreadsheet application, a gaming application, a telephone application, a video conferencing application, an e-mail application, an instant messaging application, a workout support application, a photo management application, a digital camera application, a digital video camera application, a web browsing application, a digital music player application, a television channel browsing application, and/or a digital video player application.
As illustrated in
Communication circuitry 222A, 222B optionally includes circuitry for communicating with electronic devices, networks, such as the Internet, intranets, a wired network and/or a wireless network, cellular networks, and wireless local area networks (LANs). Communication circuitry 222A, 222B optionally includes circuitry for communicating using near-field communication (NFC) and/or short-range communication, such as Bluetooth®.
Processor(s) 218A, 218B include one or more general processors, one or more graphics processors, and/or one or more digital signal processors. In some examples, memory 220A, 220B is a non-transitory computer-readable storage medium (e.g., flash memory, random access memory, or other volatile or non-volatile memory or storage) that stores computer-readable instructions configured to be executed by processor(s) 218A, 218B to perform the techniques, processes, and/or methods described below. In some examples, memory 220A, 220B can include more than one non-transitory computer-readable storage medium. A non-transitory computer-readable storage medium can be any medium (e.g., excluding a signal) that can tangibly contain or store computer-executable instructions for use by or in connection with the instruction execution system, apparatus, or device. In some examples, the storage medium is a transitory computer-readable storage medium. In some examples, the storage medium is a non-transitory computer-readable storage medium. The non-transitory computer-readable storage medium can include, but is not limited to, magnetic, optical, and/or semiconductor storages. Examples of such storage include magnetic disks, optical discs based on compact disc (CD), digital versatile disc (DVD), or Blu-ray technologies, as well as persistent solid-state memory such as flash, solid-state drives, and the like.
In some examples, display generation component(s) 214A, 214B include a single display (e.g., a liquid-crystal display (LCD), organic light-emitting diode (OLED), or other types of display). In some examples, display generation component(s) 214A, 214B includes multiple displays. In some examples, display generation component(s) 214A, 214B can include a display with touch capability (e.g., a touch screen), a projector, a holographic projector, a retinal projector, a transparent or translucent display, etc. In some examples, electronic devices 260 and 270 include touch-sensitive surface(s) 209A and 209B, respectively, for receiving user inputs, such as tap inputs and swipe inputs or other gestures. In some examples, display generation component(s) 214A, 214B and touch-sensitive surface(s) 209A, 209B form touch-sensitive display(s) (e.g., a touch screen integrated with electronic devices 260 and 270, respectively, or external to electronic devices 260 and 270, respectively, that is in communication with electronic devices 260 and 270).
Electronic devices 260 and 270 optionally include image sensor(s) 206A and 206B, respectively. Image sensors(s) 206A/206B optionally include one or more visible light image sensors, such as charged coupled device (CCD) sensors, and/or complementary metal-oxide-semiconductor (CMOS) sensors operable to obtain images of physical objects from the real-world environment. Image sensor(s) 206A/206B also optionally include one or more infrared (IR) sensors, such as a passive or an active IR sensor, for detecting infrared light from the real-world environment. For example, an active IR sensor includes an IR emitter for emitting infrared light into the real-world environment. Image sensor(s) 206A/206B also optionally include one or more cameras configured to capture movement of physical objects in the real-world environment. Image sensor(s) 206A/206B also optionally include one or more depth sensors configured to detect the distance of physical objects from electronic device 260/270. In some examples, information from one or more depth sensors can allow the device to identify and differentiate objects in the real-world environment from other objects in the real-world environment. In some examples, one or more depth sensors can allow the device to determine the texture and/or topography of objects in the real-world environment.
In some examples, electronic devices 260 and 270 use CCD sensors, event cameras, and depth sensors in combination to detect the physical environment around electronic devices 260 and 270. In some examples, image sensor(s) 206A/206B include a first image sensor and a second image sensor. The first image sensor and the second image sensor work in tandem and are optionally configured to capture different information of physical objects in the real-world environment. In some examples, the first image sensor is a visible light image sensor and the second image sensor is a depth sensor. In some examples, electronic device 260/270 uses image sensor(s) 206A/206B to detect the position and orientation of electronic device 260/270 and/or display generation component(s) 214A/214B in the real-world environment. For example, electronic device 260/270 uses image sensor(s) 206A/206B to track the position and orientation of display generation component(s) 214A/214B relative to one or more fixed objects in the real-world environment.
In some examples, electronic device 260/270 includes microphone(s) 213A/213B or other audio sensors. Device 260/270 uses microphone(s) 213A/213B to detect sound from the user and/or the real-world environment of the user. In some examples, microphone(s) 213A/213B includes an array of microphones (a plurality of microphones) that optionally operate in tandem, such as to identify ambient noise or to locate the source of sound in space of the real-world environment.
In some examples, device 260/270 includes location sensor(s) 204A/204B for detecting a location of device 260/270 and/or display generation component(s) 214A/214B. For example, location sensor(s) 204A/204B can include a global positioning system (GPS) receiver that receives data from one or more satellites and allows electronic device 260/270 to determine the device's absolute position in the physical world.
In some examples, electronic device 260/270 includes orientation sensor(s) 210A/210B for detecting orientation and/or movement of electronic device 260/270 and/or display generation component(s) 214A/214B. For example, electronic device 260/270 uses orientation sensor(s) 210A/210B to track changes in the position and/or orientation of electronic device 260/270 and/or display generation component(s) 214A/214B, such as with respect to physical objects in the real-world environment. Orientation sensor(s) 210A/210B optionally include one or more gyroscopes and/or one or more accelerometers.
Electronic device 260/270 includes hand tracking sensor(s) 202A/202B and/or eye tracking sensor(s) 212A/212B (and/or other body tracking sensor(s), such as leg, torso, and/or head tracking sensor(s)), in some examples. Hand tracking sensor(s) 202A/202B are configured to track the position/location of one or more portions of the user's hands, and/or motions of one or more portions of the user's hands with respect to the extended reality environment, relative to the display generation component(s) 214A/214B, and/or relative to another defined coordinate system. Eye tracking sensor(s) 212A/212B are configured to track the position and movement of a user's gaze (eyes, face, or head, more generally) with respect to the real-world or extended reality environment and/or relative to the display generation component(s) 214A/214B. In some examples, hand tracking sensor(s) 202A/202B and/or eye tracking sensor(s) 212A/212B are implemented together with the display generation component(s) 214A/214B. In some examples, the hand tracking sensor(s) 202A/202B and/or eye tracking sensor(s) 212A/212B are implemented separate from the display generation component(s) 214A/214B.
In some examples, the hand tracking sensor(s) 202A/202B (and/or other body tracking sensor(s), such as leg, torso, and/or head tracking sensor(s)) can use image sensor(s) 206A/206B (e.g., one or more IR cameras, 3D cameras, depth cameras, etc.) that capture three-dimensional information from the real-world including one or more body parts (e.g., hands, legs, or torso of a human user). In some examples, the hands can be resolved with sufficient resolution to distinguish fingers and their respective positions. In some examples, one or more image sensors 206A/206B are positioned relative to the user to define a field of view of the image sensor(s) 206A/206B and an interaction space in which finger/hand position, orientation and/or movement captured by the image sensors are used as inputs (e.g., to distinguish from a user's resting hand or other hands of other persons in the real-world environment). Tracking the fingers/hands for input (e.g., gestures, touch, tap, etc.) can be advantageous in that it does not require the user to touch, hold or wear any sort of beacon, sensor, or other marker.
In some examples, eye tracking sensor(s) 212A/212B includes at least one eye tracking camera (e.g., infrared (IR) cameras) and/or illumination sources (e.g., IR light sources, such as LEDs) that emit light towards a user's eyes. The eye tracking cameras may be pointed towards a user's eyes to receive reflected IR light from the light sources directly or indirectly from the eyes. In some examples, both eyes are tracked separately by respective eye tracking cameras and illumination sources, and a focus/gaze can be determined from tracking both eyes. In some examples, one eye (e.g., a dominant eye) is tracked by one or more respective eye tracking cameras/illumination sources.
Electronic device 260/270 and system 201 are not limited to the components and configuration of
As shown in
As mentioned above, in some examples, the first electronic device 360 is optionally in a multi-user communication session with the second electronic device 370. For example, the first electronic device 360 and the second electronic device 370 (e.g., via communication circuitry 222A/222B) are configured to present a shared three-dimensional environment 350A/350B that includes one or more shared virtual objects (e.g., content such as images, video, audio and the like, representations of user interfaces of applications, etc.). As used herein, the term “shared three-dimensional environment” refers to a three-dimensional environment that is independently presented, displayed, and/or visible at two or more electronic devices via which content, applications, data, and the like may be shared and/or presented to users of the two or more electronic devices. In some examples, while the first electronic device 360 is in the multi-user communication session with the second electronic device 370, an avatar corresponding to the user of one electronic device is optionally displayed in the three-dimensional environment that is displayed via the other electronic device. For example, as shown in
In some examples, the presentation of avatars 315/317 as part of a shared three-dimensional environment is optionally accompanied by an audio effect corresponding to a voice of the users of the electronic devices 370/360. For example, the avatar 315 displayed in the three-dimensional environment 350A using the first electronic device 360 is optionally accompanied by an audio effect corresponding to the voice of the user of the second electronic device 370. In some such examples, when the user of the second electronic device 370 speaks, the voice of the user may be detected by the second electronic device 370 (e.g., via the microphone(s) 213B) and transmitted to the first electronic device 360 (e.g., via the communication circuitry 222B/222A), such that the detected voice of the user of the second electronic device 370 may be presented as audio (e.g., using speaker(s) 216A) to the user of the first electronic device 360 in three-dimensional environment 350A. In some examples, the audio effect corresponding to the voice of the user of the second electronic device 370 may be spatialized such that it appears to the user of the first electronic device 360 to emanate from the location of avatar 315 in the shared three-dimensional environment 350A (e.g., despite being outputted from the speakers of the first electronic device 360). Similarly, the avatar 317 displayed in the three-dimensional environment 350B using the second electronic device 370 is optionally accompanied by an audio effect corresponding to the voice of the user of the first electronic device 360. In some such examples, when the user of the first electronic device 360 speaks, the voice of the user may be detected by the first electronic device 360 (e.g., via the microphone(s) 213A) and transmitted to the second electronic device 370 (e.g., via the communication circuitry 222A/222B), such that the detected voice of the user of the first electronic device 360 may be presented as audio (e.g., using speaker(s) 216B) to the user of the second electronic device 370 in three-dimensional environment 350B. In some examples, the audio effect corresponding to the voice of the user of the first electronic device 360 may be spatialized such that it appears to the user of the second electronic device 370 to emanate from the location of avatar 317 in the shared three-dimensional environment 350B (e.g., despite being outputted from the speakers of the second electronic device 370).
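The spatialized audio behavior described above can be pictured with a minimal sketch. The attenuation model and panning math below are simplifying assumptions (a production system would typically use head-related transfer functions); the sketch only illustrates deriving left/right output levels from the avatar's position relative to the listener's viewpoint so that the voice appears to emanate from the avatar.

    import math

    def spatialize_voice(listener_pos, listener_yaw, avatar_pos, base_gain=1.0):
        """Return (left, right) gains for a voice emanating from the avatar's location."""
        dx = avatar_pos[0] - listener_pos[0]
        dz = avatar_pos[1] - listener_pos[1]
        distance = math.hypot(dx, dz)
        gain = base_gain / max(distance, 0.5)        # simple distance attenuation
        bearing = math.atan2(dx, dz) - listener_yaw  # avatar's angle from straight ahead
        pan = math.sin(bearing)                      # -1 = fully left, +1 = fully right
        return gain * (1.0 - pan) / 2.0, gain * (1.0 + pan) / 2.0

    # Avatar one meter ahead and slightly to the right of the listener's viewpoint.
    print(spatialize_voice((0.0, 0.0), 0.0, (0.3, 1.0)))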
In some examples, while in the multi-user communication session, the avatars 315/317 are displayed in the three-dimensional environments 350A/350B with respective orientations that correspond to and/or are based on orientations of the electronic devices 360/370 (and/or the users of electronic devices 360/370) in the physical environments surrounding the electronic devices 360/370. For example, as shown in
Additionally, in some examples, while in the multi-user communication session, a viewpoint of the three-dimensional environments 350A/350B and/or a location of the viewpoint of the three-dimensional environments 350A/350B optionally changes in accordance with movement of the electronic devices 360/370 (e.g., by the users of the electronic devices 360/370). For example, while in the communication session, if the first electronic device 360 is moved closer toward the representation of the table 306′ and/or the avatar 315 (e.g., because the user of the first electronic device 360 moved forward in the physical environment surrounding the first electronic device 360), the viewpoint of the three-dimensional environment 350A would change accordingly, such that the representation of the table 306′, the representation of the window 309′ and the avatar 315 appear larger in the field of view. In some examples, each user may independently interact with the three-dimensional environment 350A/350B, such that changes in viewpoints of the three-dimensional environment 350A and/or interactions with virtual objects in the three-dimensional environment 350A by the first electronic device 360 optionally do not affect what is shown in the three-dimensional environment 350B at the second electronic device 370, and vice versa.
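The viewpoint-dependent presentation described above can be illustrated with a short sketch. The 2D top-down frame and yaw-only rotation are assumptions made for brevity; the point is only that a shared object keeps a fixed position in the shared environment while each device re-expresses that position relative to its own, possibly moving, viewpoint.

    import math

    def to_viewpoint_frame(shared_point, device_position, device_yaw):
        """Express a fixed shared-environment (x, z) point relative to a device's viewpoint."""
        dx = shared_point[0] - device_position[0]
        dz = shared_point[1] - device_position[1]
        # Rotate by the inverse of the device's yaw so +z means "straight ahead".
        cos_y, sin_y = math.cos(-device_yaw), math.sin(-device_yaw)
        return (cos_y * dx + sin_y * dz, -sin_y * dx + cos_y * dz)

    table = (0.0, 2.0)                                  # shared object two meters from the origin
    print(to_viewpoint_frame(table, (0.0, 0.0), 0.0))   # directly ahead at 2 m
    print(to_viewpoint_frame(table, (0.0, 1.0), 0.0))   # after moving forward: 1 m ahead (appears larger)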
In some examples, the avatars 315/317 are representations (e.g., a full-body rendering) of the users of the electronic devices 370/360. In some examples, the avatar 315/317 is a representation of a portion (e.g., a rendering of a head, face, head and torso, etc.) of the users of the electronic devices 370/360. In some examples, the avatars 315/317 are user-personalized, user-selected, and/or user-created representations displayed in the three-dimensional environments 350A/350B that are representative of the users of the electronic devices 370/360. It should be understood that, while the avatars 315/317 illustrated in
As mentioned above, while the first electronic device 360 and the second electronic device 370 are in the multi-user communication session, the three-dimensional environments 350A/350B may be a shared three-dimensional environment that is presented using the electronic devices 360/370. In some examples, content that is viewed by one user at one electronic device may be shared with another user at another electronic device in the multi-user communication session. In some such examples, the content may be experienced (e.g., viewed and/or interacted with) by both users (e.g., via their respective electronic devices) in the shared three-dimensional environment. For example, as shown in
In some examples, the three-dimensional environments 350A/350B include unshared content that is private to one user in the multi-user communication session. For example, in
As mentioned previously above, in some examples, the user of the first electronic device 360 and the user of the second electronic device 370 are in a spatial group 340 within the multi-user communication session. In some examples, the spatial group 340 may be a baseline (e.g., a first or default) spatial group within the multi-user communication session. For example, when the user of the first electronic device 360 and the user of the second electronic device 370 initially join the multi-user communication session, the user of the first electronic device 360 and the user of the second electronic device 370 are automatically (and initially, as discussed in more detail below) associated with (e.g., grouped into) the spatial group 340 within the multi-user communication session. In some examples, while the users are in the spatial group 340 as shown in
It should be understood that, in some examples, more than two electronic devices may be communicatively linked in a multi-user communication session. For example, in a situation in which three electronic devices are communicatively linked in a multi-user communication session, a first electronic device would display two avatars, rather than just one avatar, corresponding to the users of the other two electronic devices. It should therefore be understood that the various processes and exemplary interactions described herein with reference to the first electronic device 360 and the second electronic device 370 in the multi-user communication session optionally apply to situations in which more than two electronic devices are communicatively linked in a multi-user communication session.
In some examples, it may be advantageous to provide mechanisms for facilitating a multi-user communication session that includes collocated and non-collocated users (e.g., collocated and non-collocated electronic devices associated with the users). For example, it may be desirable to enable users who are collocated in a first physical environment to establish a multi-user communication session with one or more users who are non-collocated in the first physical environment, such that virtual content may be shared and presented in a three-dimensional environment that is optionally viewable by and/or interactive to the collocated and non-collocated users in the multi-user communication session. As used herein, relative to a first electronic device, a collocated user corresponds to a local user and a non-collocated user corresponds to a remote user. As similarly discussed above, the three-dimensional environment optionally includes avatars corresponding to the remote users of the electronic devices that are non-collocated in the multi-user communication session. In some examples, as discussed below, the presentation of virtual objects (e.g., avatars and shared virtual content) in the three-dimensional environment within a multi-user communication session that includes collocated and non-collocated users (e.g., relative to a first electronic device) is based on positions and/or orientations of the collocated users in a physical environment of the first electronic device.
In
In some examples, the three-dimensional environments 450A/450B include captured portions of the physical environment 400 in which the electronic devices 101a/101b are located. For example, because the first electronic device 101a and the second electronic device 101b are collocated in the physical environment 400, the three-dimensional environments 450A and 450B include the stand 408 (e.g., a representation of the stand) and the houseplant 409 (e.g., a representation of the houseplant), but from the viewpoints of the first electronic device 101a and the second electronic device 101b, as shown in
As described above with reference to
As similarly described above with reference to
In
In some examples, the third electronic device is non-collocated with the first electronic device 101a and the second electronic device 101b. For example, as shown in overhead view 412 in
In some examples, when the first electronic device 101a and the second electronic device 101b detect the indication discussed above, the first electronic device 101a and the second electronic device 101b display message element 420 (e.g., a notification) corresponding to the request to join the multi-user communication session with the third electronic device 101c. In some examples, as shown in
In
In some examples, in response to detecting the input accepting the request to join the multi-user communication session with the third electronic device 101c, the first electronic device 101a and the second electronic device 101b initiate a process for presenting an avatar corresponding to the third user 406 of the third electronic device 101c in the three-dimensional environments 450A and 450B, indicative of entering the multi-user communication session with the third electronic device 101c. For example, as mentioned above, because the third user 406 is non-collocated with the first user 402 and the second user 404 in the physical environment 400, the third user 406 is represented via an avatar (or other virtual representation) in the three-dimensional environment 450A/450B while in the multi-user communication session. In some examples, as discussed below, initiating the process for presenting the avatar corresponding to the third user 406 in the three-dimensional environment 450A/450B includes identifying a placement location for the avatar within the first spatial group of the first user 402 and the second user 404.
In some examples, as shown in
In some examples, the origin 431 (e.g., and thus the shared coordinate system) discussed above is defined based on the physical environment 400 (e.g., the physical room in which the first electronic device 101a and the second electronic device 101b are located). In some examples, the first electronic device 101a and the second electronic device 101b are each configured to analyze the physical environment 400 to determine the origin 431 (e.g., and the shared coordinate system) based on Simultaneous Localization and Mapping (SLAM) data exchanged between the first electronic device 101a and the second electronic device 101b (e.g., SLAM data individually stored on the electronic devices 101a and 101b or SLAM data stored on one of the electronic devices 101a and 101b). For example, the first electronic device 101a and the second electronic device 101b utilize the SLAM data to facilitate shared understanding of one or more physical properties of the physical environment 400, such as dimensions of the physical environment, physical objects within the physical environment, a visual appearance (e.g., color and lighting characteristics) of the physical environment, etc., according to which the origin 431 may be defined in the first spatial group. In some examples, the first electronic device 101a and the second electronic device 101b are each configured to analyze the physical environment 400 to determine the origin 431 based on one or more characteristics of the other electronic device as perceived by the electronic devices individually. For example, based on one or more images captured via the external image sensors 114b-i and 114c-i, the first electronic device 101a analyzes a position of the second electronic device 101b in the physical environment relative to the viewpoint of the first electronic device 101a and, based on one or more images captured via the external image sensors 114b-ii and 114c-ii, the second electronic device 101b analyzes a position of the first electronic device 101a in the physical environment 400 relative to the viewpoint of the second electronic device 101b to establish spatial truth within the first spatial group and thus define the origin 431.
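One possible way to derive the shared origin from mutually exchanged pose information is sketched below. The conventions (2D positions, yaw angles, origin at the devices' midpoint, forward axis along their circular-mean heading) are illustrative assumptions rather than requirements of the examples described above.

    import math

    def shared_origin(pose_a, pose_b):
        """Each pose is ((x, z), yaw_radians) expressed in a reconciled map of the room."""
        (ax, az), yaw_a = pose_a
        (bx, bz), yaw_b = pose_b
        origin = ((ax + bx) / 2.0, (az + bz) / 2.0)
        # Circular mean of the two yaw angles gives a shared forward direction.
        forward = math.atan2(math.sin(yaw_a) + math.sin(yaw_b),
                             math.cos(yaw_a) + math.cos(yaw_b))
        return origin, forward

    print(shared_origin(((0.0, 0.0), 0.0), ((1.0, 0.0), math.pi / 2)))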
In some examples, when the first electronic device 101a and the second electronic device 101b identify a placement location for the avatar corresponding to the third user 406 in the first spatial group, as shown in the overhead view 410, the first electronic device 101a and the second electronic device 101b analyze/identify one or more physical properties of the physical environment 400. For example, as discussed above, the physical environment 400 includes stand 408 (e.g., including houseplant 409). In some examples, the first electronic device 101a and the second electronic device 101b select a placement location for the avatar corresponding to the user of the third electronic device based on a location of the stand 408 in the physical environment 400. For example, the location at which the avatar corresponding to the third user is positioned in the shared three-dimensional environment is selected to not correspond to the location of the stand 408 in the physical environment 400.
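A minimal sketch of coordinating such a placement decision is given below. The scoring weights and clearance radius are assumed values; the sketch only illustrates combining the kinds of factors discussed in this section: keeping clear of physical objects (such as the stand 408), staying near the shared origin, and favoring positions along the collocated devices' average forward direction (discussed further below).

    import math

    def score(candidate, origin, forward_yaw, obstacles, min_clearance=0.75):
        cx, cz = candidate
        for ox, oz in obstacles:                      # reject collisions with physical objects
            if math.hypot(cx - ox, cz - oz) < min_clearance:
                return float("-inf")
        distance_penalty = math.hypot(cx - origin[0], cz - origin[1])
        bearing = math.atan2(cx - origin[0], cz - origin[1])
        alignment = math.cos(bearing - forward_yaw)   # 1.0 when directly ahead of the group
        return alignment - 0.5 * distance_penalty

    def choose_placement(candidates, origin, forward_yaw, obstacles):
        return max(candidates, key=lambda c: score(c, origin, forward_yaw, obstacles))

    candidates = [(0.0, 1.5), (1.5, 0.0), (-1.5, 0.0)]
    print(choose_placement(candidates, (0.0, 0.0), 0.0, obstacles=[(1.4, 0.2)]))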
In some examples, as shown in
In some examples, when the first electronic device 101a and the second electronic device 101b identify a placement location for the avatar corresponding to the third user 406 in the first spatial group, as shown in the overhead view 410, the first electronic device 101a and the second electronic device 101b analyze/identify one or more properties of the shared three-dimensional environment. In some examples, the one or more properties of the shared three-dimensional environment correspond to the presence of virtual objects (e.g., shared content) currently displayed in the shared three-dimensional environment (e.g., none of which exist in the current example of
In some examples, the first electronic device 101a and the second electronic device 101b select/coordinate a placement location for the avatar corresponding to the third user based on any one or combination of the factors described above. In
As indicated in the overhead view 410 in
In some examples, as shown in the overhead view 412 in
In some examples, the above-described methods for selecting a placement location for the avatar corresponding to the third user 406 may similarly be utilized for selecting a placement location for virtual content that is shared within the multi-user communication session. For example, in
In
In some examples, in response to detecting the selection of the selectable option 426, the first electronic device 101a initiates a process to display a shared virtual object in the shared three-dimensional environment. In some examples, as mentioned above, the first electronic device 101a and the second electronic device 101b coordinate to select a placement location for the shared virtual object within the shared three-dimensional environment (e.g., based on the spatial arrangement of the first electronic device 101a and the second electronic device 101b in the first spatial group). Particularly, as described above, the first electronic device 101a and the second electronic device 101b select a placement location for the shared virtual object based on positions of the first electronic device 101a and the second electronic device 101b relative to the origin 431, an average forward direction of the first electronic device 101a and the second electronic device 101b, locations of physical objects in the physical environment 400 (e.g., such as the location of the stand 408), seat locations within a spatial template for the first spatial group, and/or locations of other virtual objects currently displayed in the shared three-dimensional environment (e.g., such as avatars or other application windows). Additionally, in some examples, the first electronic device 101a and the second electronic device 101b select a placement location for the shared virtual object based on object type. For example, the object type is based on an orientation of the shared virtual object, such as whether the object is a vertically oriented object or a horizontally oriented object, as discussed in more detail herein later.
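The object-type factor can be pictured with a brief sketch. The heights, forward offset, and axis conventions below are assumptions chosen for illustration; the sketch only distinguishes a vertically oriented object (e.g., a window), placed ahead of the group at roughly eye level, from a horizontally oriented object, placed at the group's center at tabletop height.

    import math

    def place_shared_content(is_vertical, origin, forward_yaw,
                             eye_height=1.5, table_height=0.75, ahead=2.5):
        if is_vertical:
            x = origin[0] + ahead * math.sin(forward_yaw)
            z = origin[1] + ahead * math.cos(forward_yaw)
            return (x, eye_height, z)                    # floats ahead of the group at eye level
        return (origin[0], table_height, origin[1])      # rests at the center of the group

    print(place_shared_content(True, (0.0, 0.0), 0.0))   # vertically oriented window
    print(place_shared_content(False, (0.0, 0.0), 0.0))  # horizontally oriented content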
In some examples, when the first electronic device 101a and the second electronic device 101b select a placement location for the shared virtual object in the shared three-dimensional environment (e.g., within the first spatial group), the first electronic device 101a and the second electronic device 101b display the shared virtual object at the selected placement location. For example, as shown in
In some examples, the above-described methods for selecting a placement location for the avatar corresponding to the third user 406 or the shared application window 435 may similarly be utilized for selecting placement locations for avatars corresponding to a group of remote users that joins the multi-user communication session. In
In
In
In some examples, in response to detecting the input indicating the acceptance of the request to enter the multi-user communication session with the group of remote users, the first electronic device 101a (e.g., and the second electronic device 101b and the third electronic device 101c) initiate a process to display a plurality of avatars corresponding to the group of remote users in the shared three-dimensional environment. In some examples, because a group of remote users, rather than a single remote user, is entering the multi-user communication session with the first user 402, the second user 404, and the third user 406, the placement locations for the avatars corresponding to the group of remote users is determined based on a spatial arrangement of the remote users as well as being determined based on at least the spatial arrangement of the first user 402, the second user 404, and the third user 406 (e.g., as discussed previously above with reference to
Additionally, in some examples, the placement locations for the avatars corresponding to the group of remote users in the shared environment of the group of local users (e.g., the first user 402, the second user 404, and the third user 406) is determined based on an average forward direction of the electronic devices associated with the remote users. For example, as shown in the overhead view 412 in
In some examples, according to the methods discussed previously above, the first electronic device 101a, the second electronic device 101b, and the third electronic device 101c determine average forward direction 432a for the first spatial group based on the individual orientations of the electronic devices in the physical environment 400. Additionally, as previously discussed herein, the first electronic device 101a, the second electronic device 101b, and the third electronic device 101c identify locations of the electronic devices and/or distances between the electronic devices relative to origin 431a in the first spatial group.
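One way to compute such an average forward direction is a circular mean of the devices' yaw angles, sketched below with assumed angle conventions; the circular mean avoids the wrap-around problem of averaging raw angle values (e.g., headings of 350 degrees and 10 degrees average to roughly 0 degrees rather than 180 degrees).

    import math

    def average_forward_direction(yaws_radians):
        sin_sum = sum(math.sin(y) for y in yaws_radians)
        cos_sum = sum(math.cos(y) for y in yaws_radians)
        return math.atan2(sin_sum, cos_sum)

    yaws = [math.radians(a) for a in (350.0, 10.0, 0.0)]
    print(round(math.degrees(average_forward_direction(yaws)), 1))   # approximately 0.0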
In some examples, data corresponding to the locations of and/or distances between the electronic devices associated with the group of remote users within the second spatial group 445 is transferred to the first electronic device 101a, the second electronic device 101b, and/or the third electronic device 101c (e.g., directly from each electronic device or via a wireless server). Additionally or alternatively, in some examples, data corresponding to the orientations of and/or the average forward direction of the electronic devices associated with the group of remote users within the second spatial group 445 is transferred to the first electronic device 101a, the second electronic device 101b, and/or the third electronic device 101c (e.g., directly from each electronic device or via a wireless server). In some examples, the first electronic device 101a, the second electronic device 101b, and/or the third electronic device 101c utilize the above data to determine the placement locations for the avatars corresponding to the group of remote users in coordination with the spatial arrangement of the first spatial group and/or the average forward direction of the first electronic device 101a, the second electronic device 101b, and the third electronic device 101c.
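A minimal sketch of using that data is shown below. The separation distance and 2D conventions are assumptions; the sketch preserves the remote users' arrangement (their offsets from their own origin) while rotating the group so its average forward direction faces back toward the local group and anchoring it ahead of the local origin.

    import math

    def place_remote_group(remote_offsets, remote_forward, local_origin, local_forward,
                           separation=2.0):
        # Rotate so the remote forward direction points opposite the local forward direction.
        rotation = (local_forward + math.pi) - remote_forward
        cos_r, sin_r = math.cos(rotation), math.sin(rotation)
        # Anchor the remote group `separation` meters ahead of the local origin.
        anchor = (local_origin[0] + separation * math.sin(local_forward),
                  local_origin[1] + separation * math.cos(local_forward))
        placed = []
        for ox, oz in remote_offsets:
            rx = cos_r * ox - sin_r * oz
            rz = sin_r * ox + cos_r * oz
            placed.append((anchor[0] + rx, anchor[1] + rz))
        return placed

    # Three remote users standing shoulder to shoulder in their own spatial group.
    print(place_remote_group([(-0.8, 0.0), (0.0, 0.0), (0.8, 0.0)],
                             remote_forward=0.0, local_origin=(0.0, 0.0), local_forward=0.0))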
In some examples, as shown in
In some examples, when the avatars 415, 417, and 419 corresponding to the group of remote users are displayed in the three-dimensional environment 450A at the first electronic device 101a (e.g., and in the respective three-dimensional environments at the second electronic device 101b and the third electronic device 101c), a spatial arrangement of the avatars 415, 417, and 419 within the three-dimensional environment 450A corresponds to the spatial arrangement of the fourth user 414, the fifth user 416, and the sixth user 418 within the second spatial group 445 in
In some examples, when the avatars 415, 417, and 419 corresponding to the group of remote users are displayed in the three-dimensional environment 450A at the first electronic device 101a (e.g., and in the respective three-dimensional environments at the second electronic device 101b and the third electronic device 101c), positions of the avatars 415, 417, and 419 within the three-dimensional environment 450A are selected based on the average forward direction 432b of the electronic devices associated with the group of remote users. Particularly, as shown in the overhead view 410 in
Additionally, in some examples, when the fourth electronic device 101d, the fifth electronic device 101e, and the sixth electronic device 101f enter the multi-user communication session with the first electronic device 101a, the second electronic device 101b, and the third electronic device 101c, the fourth electronic device 101d, the fifth electronic device 101e, and the sixth electronic device 101f generate and display avatars corresponding to the first user 402, the second user 404, and the third user 406 in the respective three-dimensional environments at the fourth electronic device 101d, the fifth electronic device 101e, and the sixth electronic device 101f. For example, as illustrated in the overhead view 412 in
In some examples, as similarly described above, when the avatars 452, 454, and 456 are displayed in the respective three-dimensional environments at the fourth electronic device 101d, the fifth electronic device 101e, and the sixth electronic device 101f, the avatars 452, 454, and 456 are displayed with a spatial arrangement that is based on the spatial arrangement of the first electronic device 101a, the second electronic device 101b, and the third electronic device 101c in the first spatial group discussed previously above. For example, as shown in the overhead view 412 in
Accordingly, as outlined above, providing systems and methods for displaying virtual objects (e.g., avatars and/or virtual content) in a shared three-dimensional environment while in a multi-user communication session advantageously enables collocated and non-collocated users to participate in the multi-user communication session and experience synchronized interaction with content and other users, thereby improving user-device interaction. Additionally, automatically determining location(s) at which to display the virtual objects (e.g., avatars and/or virtual content) in the shared three-dimensional environment reduces and/or helps avoid user input for manually selecting the location(s) in the shared three-dimensional environment, which helps conserve computing resources that would otherwise be consumed to respond to such user input, as another benefit. Attention is now directed toward additional examples of displaying virtual objects (e.g., avatars and/or virtual content) within a multi-user communication session that includes collocated and non-collocated users and electronic devices.
As shown in
In the example of
In some examples, as previously discussed herein, when the first electronic device 101a (e.g., and/or the second electronic device 101b and the third electronic device 101c) receives the indication of the request to enter the multi-user communication session, the first electronic device 101a, the second electronic device 101b, and the third electronic device 101c establish/form a spatial group corresponding to a shared three-dimensional environment within which virtual objects (e.g., avatars and/or virtual content) will be displayed (e.g., according to a shared/synchronized coordinate space, such as relative to (e.g., centered on) origin 531). In some examples, as mentioned previously above, the spatial group may be associated with a spatial template that indicates a plurality or number of seats which participants in the multi-user communication session can occupy. In some examples, the spatial template is determined based on one or more physical properties of the physical environment 500 in which the electronic devices 101a, 101b, and 101c are located. For example, the first electronic device 101a (e.g., and the second electronic device 101b and the third electronic device 101c) define the spatial template for the spatial group based on a size of the physical environment 500 and/or physical objects in the physical environment 500, such as the table 508 and the plurality of chairs 547. In the example of
In some examples, when a spatial template that is based on the physical space surrounding the first electronic device 101a (e.g., and the second electronic device 101b and the third electronic device 101c) is utilized for arranging participants (e.g., remote and local users) in a multi-user communication session, the first electronic device 101a (e.g., and the second electronic device 101b and the third electronic device 101c) provides visual indications of the seats 530 within the spatial template. For example, because the seats 530 are defined to correspond to the chairs 547 and the ends of the table 508, as indicated in the overhead view 510, the first electronic device 101a displays visual indications 536 in the three-dimensional environment 550A at locations corresponding to the seats 530. In some examples, the visual indications 536 correspond to highlighting effects, glowing effects, sparkling effects, or other animation effects. The visual indications 536 optionally thus provide guidance and/or suggestion to the first user 502, the second user 504, and the third user 506 for arranging themselves at locations in the physical environment 500 that correspond to seats 530 within the spatial template of the spatial group for the multi-user communication session.
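A short sketch of deriving such a seat-based spatial template from detected furniture is given below. The geometry (a rectangular table aligned with the x axis, a fixed margin beyond each table end) is an assumption for illustration; the sketch simply yields one seat per detected chair plus a seat at each end of the table.

    from dataclasses import dataclass
    from typing import List, Tuple

    @dataclass
    class Table:
        center: Tuple[float, float]
        length: float   # extent along the x axis, in meters

    def build_spatial_template(table: Table,
                               chair_positions: List[Tuple[float, float]]) -> List[Tuple[float, float]]:
        seats = list(chair_positions)                    # one seat per detected chair
        half = table.length / 2.0 + 0.5                  # half a meter beyond each table end
        seats.append((table.center[0] - half, table.center[1]))
        seats.append((table.center[0] + half, table.center[1]))
        return seats

    table = Table(center=(0.0, 0.0), length=2.0)
    chairs = [(-0.5, 0.8), (0.5, 0.8), (-0.5, -0.8), (0.5, -0.8)]
    print(build_spatial_template(table, chairs))         # six candidate seats in total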
In
As shown in the overhead view 510 in
In some examples, as similarly discussed, while users are participating in a multi-user communication session, content can be shared within the shared three-dimensional environment, where the content is viewable by and/or interactive to the users in the multi-user communication session. For example, in
In some examples, a placement location for the shared content is selected to be in the direction of the average forward direction 532, as previously discussed above. For example, as shown in the overhead view 510 in
In some examples, in addition to the placement location for the shared content being selected based on one or more physical properties of the physical environment 500, the placement location for the shared content may be selected based on one or more properties of the shared content itself. For example, as mentioned previously above, the one or more properties of the shared content include an orientation associated with the shared content. In
Accordingly, as outlined above, providing systems and methods for displaying virtual content in a shared three-dimensional environment while in a multi-user communication session advantageously enables collocated and non-collocated users to participate in the multi-user communication session and experience synchronized interaction with the virtual content, thereby improving user-device interaction. Additionally, automatically determining location(s) at which to display the virtual content in the shared three-dimensional environment (e.g., based on one or more properties of a physical environment of the users or one or more properties of the virtual content) reduces and/or helps avoid user input for manually selecting the location(s) in the shared three-dimensional environment, which helps conserve computing resources that would otherwise be consumed to respond to such user input, as another benefit.
It is understood that the examples shown and described herein are merely exemplary and that additional and/or alternative elements may be provided within the three-dimensional environment for interacting with the illustrative content. It should be understood that the appearance, shape, form and size of each of the various user interface elements and objects shown and described herein are exemplary and that alternative appearances, shapes, forms and/or sizes may be provided. For example, the virtual objects representative of application windows (e.g., virtual objects 330, 435, 535 and 537) may be provided in an alternative shape than a rectangular shape, such as a circular shape, triangular shape, etc. In some examples, the various selectable options (e.g., options 421 and 422), user interface elements (e.g., message element 420 or user interface element 424), etc. described herein may be selected verbally via user verbal commands (e.g., “select option” verbal command). Additionally or alternatively, in some examples, the various options, user interface elements, control elements, etc. described herein may be selected and/or manipulated via user input received via one or more separate input devices in communication with the electronic device(s). For example, selection input may be received via physical input devices, such as a mouse, trackpad, keyboard, etc. in communication with the electronic device(s).
In some examples, at 604, in response to detecting the indication, the first electronic device enters the communication session that includes the first electronic device, the second electronic device, and the third electronic device. For example, as described with reference to
In some examples, at 608, the first electronic device obtains second data corresponding to an orientation of the second electronic device relative to the viewpoint of the first electronic device in the first physical environment. For example, in the overhead view 410 in
It is understood that process 600 is an example and that more, fewer, or different operations can be performed in the same or in a different order. Additionally, the operations in process 600 described above are, optionally, implemented by running one or more functional modules in an information processing apparatus such as general-purpose processors (e.g., as described with respect to
Therefore, according to the above, some examples of the disclosure are directed to a method comprising, at a first electronic device in communication with one or more displays and one or more input devices, wherein the first electronic device is collocated with a second electronic device in a first physical environment, detecting an indication of a request to enter a communication session with a third electronic device, wherein the third electronic device is non-collocated in the first physical environment, and in response to detecting the indication, entering the communication session that includes the first electronic device, the second electronic device, and the third electronic device, including: obtaining first data corresponding to a location of a user of the second electronic device relative to a viewpoint of the first electronic device in the first physical environment; obtaining second data corresponding to an orientation of the second electronic device relative to the viewpoint of the first electronic device in the first physical environment; and displaying, via the one or more displays, a visual representation of a user of the third electronic device at a second location in a computer-generated environment, the second location determined based on the first data and the second data.
Additionally or alternatively, in some examples, the first electronic device being collocated with the second electronic device in the first physical environment is in accordance with a determination that the second electronic device is within a threshold distance of the first electronic device in the first physical environment. Additionally or alternatively, in some examples, the third electronic device being non-collocated with the first electronic device in the first physical environment is in accordance with a determination that the third electronic device is more than the threshold distance from the first electronic device in the first physical environment. Additionally or alternatively, in some examples, the second electronic device being collocated with the first electronic device in the first physical environment is in accordance with a determination that the second electronic device is located in a field of view of the first electronic device. Additionally or alternatively, in some examples, the third electronic device being non-collocated with the first electronic device in the first physical environment is in accordance with a determination that the third electronic device is located in a second physical environment, different from the first physical environment. Additionally or alternatively, in some examples, the first data corresponding to the location of the user of the second electronic device relative to the viewpoint of the first electronic device in the first physical environment and the second data corresponding to the orientation of the second electronic device relative to the viewpoint of the first electronic device in the first physical environment are obtained based on one or more images of the first physical environment captured via one or more cameras of the first electronic device. Additionally or alternatively, in some examples, obtaining the first data corresponding to the location of the user of the second electronic device relative to the viewpoint of the first electronic device in the first physical environment includes receiving the first data from the second electronic device, and obtaining the second data corresponding to the orientation of the second electronic device relative to the viewpoint of the first electronic device in the first physical environment includes receiving the second data from the second electronic device.
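The collocation determinations described above could, for example, reduce to simple geometric tests. The following sketch shows a threshold-distance check and a field-of-view check; the 5 m threshold and 45° half-angle are arbitrary placeholder values, not values specified by the disclosure.

```swift
import Foundation
import simd

/// Distance-based test: collocated if within `threshold` meters of the
/// first device. The 5 m default is an arbitrary placeholder.
func isCollocated(devicePosition: simd_float3,
                  viewpointPosition: simd_float3,
                  threshold: Float = 5.0) -> Bool {
    simd_distance(devicePosition, viewpointPosition) <= threshold
}

/// Field-of-view test: collocated if the device lies within the first
/// device's horizontal field of view (half-angle given in radians).
func isInFieldOfView(devicePosition: simd_float3,
                     viewpointPosition: simd_float3,
                     viewpointForward: simd_float3,
                     halfAngle: Float = .pi / 4) -> Bool {
    let toDevice = simd_normalize(devicePosition - viewpointPosition)
    let cosAngle = simd_dot(toDevice, simd_normalize(viewpointForward))
    return cosAngle >= cos(halfAngle)
}
```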
Additionally or alternatively, in some examples, after entering the communication session that includes the first electronic device, the second electronic device, and the third electronic device, the user of the second electronic device is visible in a field of view of the first electronic device. Additionally or alternatively, in some examples, entering the communication session that includes the first electronic device, the second electronic device, and the third electronic device further includes determining an average orientation of electronic devices in the communication session based on the orientation of the second electronic device and an orientation of the first electronic device in the first physical environment. Additionally or alternatively, in some examples, the second location is further determined based on the average orientation of the electronic devices. Additionally or alternatively, in some examples, the second location is further determined based on one or more physical characteristics of the first physical environment. Additionally or alternatively, in some examples, the one or more physical characteristics of the first physical environment include one or more locations of one or more physical objects in the first physical environment. Additionally or alternatively, in some examples, the second location is further determined based on one or more characteristics of the computer-generated environment. Additionally or alternatively, in some examples, the one or more characteristics of the computer-generated environment include one or more locations of one or more virtual objects in the computer-generated environment.
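One possible reading of the average-orientation determination described above is sketched below: the forward vectors of the collocated devices are averaged, and candidate second locations along that averaged direction are rejected when they fall too close to known physical objects. The step sizes, clearance value, and function names are assumptions for illustration.

```swift
import simd

/// Averages the forward (facing) vectors of the collocated devices.
/// Assumes at least one device and non-cancelling directions.
func averageOrientation(of forwards: [simd_float3]) -> simd_float3 {
    let sum = forwards.reduce(simd_float3.zero, +)
    return simd_normalize(sum)
}

/// Picks the second location along the averaged facing direction, stepping
/// farther out if a candidate falls too close to a known physical object.
func placementAvoidingObstacles(center: simd_float3,
                                averagedForward: simd_float3,
                                obstacles: [simd_float3],
                                clearance: Float = 0.5) -> simd_float3 {
    for distance in stride(from: Float(1.5), through: 4.0, by: 0.5) {
        let candidate = center + averagedForward * distance
        let blocked = obstacles.contains { simd_distance($0, candidate) < clearance }
        if !blocked { return candidate }
    }
    // Fall back to the nearest candidate if every spot is blocked.
    return center + averagedForward * 1.5
}
```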
Additionally or alternatively, in some examples, prior to detecting the indication of the request to enter the communication session, the first electronic device is in communication with a fourth electronic device, the first electronic device being collocated with the fourth electronic device in the first physical environment, after entering the communication session, the communication session includes the first electronic device, the second electronic device, the third electronic device, and the fourth electronic device, and entering the communication session further comprises: obtaining third data corresponding to a location of a user of the fourth electronic device relative to the viewpoint of the first electronic device in the first physical environment; and obtaining fourth data corresponding to an orientation of the fourth electronic device relative to the viewpoint of the first electronic device in the first physical environment, wherein the second location is determined based on the first data, the second data, the third data, and the fourth data. Additionally or alternatively, in some examples, the method further comprises: detecting a second indication of a request to add a fourth electronic device to the communication session, wherein the fourth electronic device is non-collocated in the first physical environment; and in response to detecting the second indication, displaying, via the one or more displays, a visual representation of a user of the fourth electronic device at a third location, different from the second location, in the computer-generated environment, the third location based on the first data, the second data, and the second location at which the visual representation of the user of the third electronic device is displayed. Additionally or alternatively, in some examples, the method further comprises: after entering the communication session, detecting a third indication of a request to display shared content in the computer-generated environment; and in response to detecting the third indication, displaying, via the one or more displays, a first object corresponding to the shared content at a fourth location, different from the second location, in the computer-generated environment, the fourth location based on the first data, the second data, and the second location at which the visual representation of the user of the third electronic device is displayed. Additionally or alternatively, in some examples, in accordance with a determination that the shared content is a first type of content, the fourth location is a first respective location in the computer-generated environment, and in accordance with a determination that the shared content is a second type of content, different from the first type of content, the fourth location is a second respective location, different from the first respective location, in the computer-generated environment.
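The content-type-dependent placement described above might, for instance, distinguish window-style shared content placed beyond the remote user's visual representation from horizontally oriented content placed between the participants. The enum cases and offsets below are illustrative assumptions only.

```swift
import simd

/// Hypothetical content categories standing in for the "first type" and
/// "second type" of shared content recited above.
enum SharedContentType {
    case window       // e.g., a flat application window
    case horizontal   // e.g., tabletop-style content
}

/// Chooses the fourth location for shared content relative to the group's
/// center, the averaged facing direction, and the avatar's second location.
func sharedContentPlacement(type: SharedContentType,
                            groupCenter: simd_float3,
                            averagedForward: simd_float3,
                            avatarLocation: simd_float3) -> simd_float3 {
    switch type {
    case .window:
        // Place window content beyond the avatar so every participant can face it.
        return avatarLocation + averagedForward * 1.0 + simd_float3(0, 0.5, 0)
    case .horizontal:
        // Place tabletop-style content between the collocated group and the avatar.
        return (groupCenter + avatarLocation) / 2
    }
}
```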
Additionally or alternatively, in some examples, the indication of the request to enter the communication session with the third electronic device corresponds to an indication of a request to enter the communication session with the third electronic device and a fifth electronic device, wherein the fifth electronic device is collocated with the third electronic device in a second physical environment, different from the first physical environment. Additionally or alternatively, in some examples, the method further comprises, in response to detecting the indication, displaying, via the one or more displays, a visual representation of a user of the fifth electronic device at a fifth location, different from the second location, in the computer-generated environment, the fifth location based on the first data and the second data. Additionally or alternatively, in some examples, the third electronic device is separated from the fifth electronic device by a first distance in the second physical environment, and the fifth location is separated from the second location by the first distance in the computer-generated environment. Additionally or alternatively, in some examples, the method further comprises, in response to detecting the indication: determining a first average orientation of electronic devices in the first physical environment based on the orientation of the second electronic device and an orientation of the first electronic device; and determining a second average orientation of electronic devices in the second physical environment based on an orientation of the third electronic device and an orientation of the fifth electronic device; wherein the second location and the fifth location are further determined based on aligning the first average orientation and the second average orientation. Additionally or alternatively, in some examples, detecting the indication of the request to enter the communication session with the third electronic device includes detecting, via the one or more input devices, an input corresponding to a request to initiate a communication session with the second electronic device and the third electronic device. Additionally or alternatively, in some examples, the indication of the request to enter the communication session with the third electronic device corresponds to user input detected by the second electronic device or the third electronic device for initiating a communication session with the first electronic device and the third electronic device or the first electronic device and the second electronic device.
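As a rough illustration of aligning the two groups' average orientations while preserving the remote participants' relative spacing, the sketch below rotates each remote participant's offset from their group's centroid into the local environment. Because the mapping is a rigid rotation plus a translation, the distances between remote participants are preserved. The coordinate conventions and names are assumptions, not the disclosed implementation.

```swift
import simd

/// Rotation that aligns one averaged facing direction with another.
func alignmentRotation(from remoteForward: simd_float3,
                       to localForward: simd_float3) -> simd_quatf {
    simd_quatf(from: simd_normalize(remoteForward), to: simd_normalize(localForward))
}

/// Maps each remote participant's offset from their group's centroid into the
/// local environment, preserving the distances between remote participants.
func placeRemoteGroup(remoteOffsets: [simd_float3],
                      remoteForward: simd_float3,
                      localAnchor: simd_float3,
                      localForward: simd_float3) -> [simd_float3] {
    let rotation = alignmentRotation(from: remoteForward, to: localForward)
    return remoteOffsets.map { localAnchor + rotation.act($0) }
}

// Example: two remote users 1 m apart keep that 1 m separation locally.
let offsets = [simd_float3(-0.5, 0, 0), simd_float3(0.5, 0, 0)]
let placed = placeRemoteGroup(remoteOffsets: offsets,
                              remoteForward: simd_float3(1, 0, 0),
                              localAnchor: simd_float3(0, 0, -2),
                              localForward: simd_float3(0, 0, -1))
print(placed)
```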
Some examples of the disclosure are directed to a first electronic device comprising: one or more processors; memory; and one or more programs stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for performing any of the above methods.
Some examples of the disclosure are directed to a non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by one or more processors of a first electronic device, cause the first electronic device to perform any of the above methods.
Some examples of the disclosure are directed to a first electronic device, comprising one or more processors, memory, and means for performing any of the above methods.
Some examples of the disclosure are directed to an information processing apparatus for use in a first electronic device, the information processing apparatus comprising means for performing any of the above methods.
The foregoing description, for purpose of explanation, has been described with reference to specific examples. However, the illustrative discussions above are not intended to be exhaustive or to limit the disclosure to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The examples were chosen and described in order to best explain the principles of the disclosure and its practical applications, to thereby enable others skilled in the art to best use the disclosure and various described examples with various modifications as are suited to the particular use contemplated.
This application claims the benefit of U.S. Provisional Application No. 63/614,486 filed Dec. 22, 2023, the entire disclosure of which is herein incorporated by reference for all purposes.