This relates generally to systems and methods of managing spatial groups within multi-user communication sessions.
Some computer graphical environments provide two-dimensional and/or three-dimensional environments where at least some objects displayed for a user's viewing are virtual and generated by a computer. In some examples, the three-dimensional environments are presented by multiple devices communicating in a multi-user communication session. In some examples, an avatar (e.g., a representation) of each user participating in the multi-user communication session (e.g., via the computing devices) is displayed in the three-dimensional environment of the multi-user communication session. In some examples, content can be shared in the three-dimensional environment for viewing and interaction by multiple users participating in the multi-user communication session.
Some examples of the disclosure are directed to systems and methods for managing locations of users in a spatial group within a multi-user communication session based on the display of shared content in a three-dimensional environment. In some examples, a first electronic device, a second electronic device, and a third electronic device are communicatively linked in a multi-user communication session. In some examples, the first electronic device displays a three-dimensional environment including an avatar corresponding to a user of the second electronic device and an avatar corresponding to a user of the third electronic device, wherein the avatar corresponding to the user of the second electronic device and the avatar corresponding to the user of the third electronic device are separated by a first distance. In some examples, in response to detecting an input corresponding to a request to display shared content in the three-dimensional environment, in accordance with a determination that the shared content is a first type of content, the first electronic device displays a first object corresponding to the shared content in the three-dimensional environment. In some examples, the first electronic device updates the avatar corresponding to the user of the second electronic device and the avatar corresponding to the user of the third electronic device, such that the avatar corresponding to the user of the second electronic device and the avatar corresponding to the user of the third electronic device are separated by a second distance different from the first distance. In some examples, in accordance with a determination that the shared content is a second type of content, different from the first type, the first electronic device displays a second object corresponding to the shared content in the three-dimensional environment. In some examples, the first electronic device maintains display of the avatar corresponding to the user of the second electronic device and the avatar corresponding to the user of the third electronic device to be separated by the first distance.
In some examples, a first electronic device, a second electronic device, and a third electronic device are communicatively linked in a multi-user communication session. In some examples, the first electronic device displays a three-dimensional environment including an avatar corresponding to a user of the second electronic device and an avatar corresponding to a user of the third electronic device, wherein the avatar corresponding to the user of the second electronic device is displayed at a first location and the avatar corresponding to the user of the third electronic device is displayed at a second location relative to a viewpoint of the first electronic device. In some examples, in response to detecting an input corresponding to a request to display content in the three-dimensional environment, in accordance with a determination that the content corresponds to shared content, the first electronic device displays a first object corresponding to the shared content in the three-dimensional environment. In some examples, the first electronic device moves the avatar corresponding to the user of the second electronic device to a first updated location and moves the avatar corresponding to the user of the third electronic device to a second updated location, different from the first updated location, relative to the viewpoint of the first electronic device. In some examples, the first electronic device moves the avatars in a respective direction that is based on a location of the first object in the three-dimensional environment.
The full descriptions of these examples are provided in the Drawings and the Detailed Description, and it is understood that this Summary does not limit the scope of the disclosure in any way.
For improved understanding of the various examples described herein, reference should be made to the Detailed Description below along with the following drawings. Like reference numerals often refer to corresponding parts throughout the drawings.
Some examples of the disclosure are directed to systems and methods for managing locations of users in a spatial group within a multi-user communication session based on the display of shared content in a three-dimensional environment. In some examples, a first electronic device, a second electronic device, and a third electronic device are communicatively linked in a multi-user communication session. In some examples, the first electronic device displays a three-dimensional environment including an avatar corresponding to a user of the second electronic device and an avatar corresponding to a user of the third electronic device, wherein the avatar corresponding to the user of the second electronic device and the avatar corresponding to the user of the third electronic device are separated by a first distance. In some examples, in response to detecting an input corresponding to a request to display shared content in the three-dimensional environment, in accordance with a determination that the shared content is a first type of content, the first electronic device displays a first object corresponding to the shared content in the three-dimensional environment. In some examples, the first electronic device updates the avatar corresponding to the user of the second electronic device and the avatar corresponding to the user of the third electronic device, such that the avatar corresponding to the user of the second electronic device and the avatar corresponding to the user of the third electronic device are separated by a second distance different from the first distance. In some examples, in accordance with a determination that the shared content is a second type of content, different from the first type, the first electronic device displays a second object corresponding to the shared content in the three-dimensional environment. In some examples, the first electronic device maintains display of the avatar corresponding to the user of the second electronic device and the avatar corresponding to the user of the third electronic device to be separated by the first distance.
In some examples, a first electronic device, a second electronic device, and a third electronic device are communicatively linked in a multi-user communication session. In some examples, the first electronic device displays a three-dimensional environment including an avatar corresponding to a user of the second electronic device and an avatar corresponding to a user of the third electronic device, wherein the avatar corresponding to the user of the second electronic device is displayed at a first location and the avatar corresponding to the user of the third electronic device is displayed at a second location relative to a viewpoint of the first electronic device. In some examples, in response to detecting an input corresponding to a request to display content in the three-dimensional environment, in accordance with a determination that the content corresponds to shared content, the first electronic device displays a first object corresponding to the shared content in the three-dimensional environment. In some examples, the first electronic device moves the avatar corresponding to the user of the second electronic device to a first updated location and moves the avatar corresponding to the user of the third electronic device to a second updated location, different from the first updated location, relative to the viewpoint. In some examples, the first electronic device moves the avatars in a respective direction that is based on a location of the first object in the three-dimensional environment.
In some examples, a plurality of users in a multi-user communication session has or is associated with a spatial group that dictates locations of one or more users and/or content in a shared three-dimensional environment. In some examples, users that share the same spatial group within the multi-user communication session experience spatial truth (e.g., defined later herein) according to a spatial arrangement of the users (e.g., distances between adjacent users) in the spatial group. In some examples, when a user of a first electronic device shares a spatial arrangement with a user of a second electronic device, the users experience spatial truth relative to three-dimensional representations (e.g., avatars) corresponding to the users in their respective three-dimensional environments.
In some examples, displaying (e.g., sharing) content in the three-dimensional environment while in the multi-user communication session may include interaction with one or more user interface elements. In some examples, a user's gaze may be tracked by the electronic device as an input for targeting a selectable option/affordance within a respective user interface element that is displayed in the three-dimensional environment. For example, gaze can be used to identify one or more options/affordances targeted for selection using another selection input. In some examples, a respective option/affordance may be selected using hand-tracking input detected via an input device in communication with the electronic device. In some examples, objects displayed in the three-dimensional environment may be moved and/or reoriented in the three-dimensional environment in accordance with movement input detected via the input device.
It should be understood that virtual object 110 is a representative virtual object and one or more different virtual objects (e.g., of various dimensionality such as two-dimensional or three-dimensional virtual objects) can be included and rendered in a three-dimensional computer-generated environment. For example, the virtual object can represent an application or a user interface displayed in the computer-generated environment. In some examples, the virtual object can represent content corresponding to the application and/or displayed via the user interface in the computer-generated environment. In some examples, the virtual object 110 is optionally configured to be interactive and responsive to user input, such that a user may virtually touch, tap, move, rotate, or otherwise interact with the virtual object. In some examples, the virtual object 110 may be displayed in a three-dimensional computer-generated environment within a multi-user communication session (“multi-user communication session,” “communication session”). In some such examples, as described in more detail below, the virtual object 110 may be viewable and/or configured to be interactive and responsive to multiple users and/or user input provided by multiple users, respectively. Additionally, it should be understood that the three-dimensional environment (or three-dimensional virtual object) described herein may be a representation of a three-dimensional environment (or three-dimensional virtual object) projected or presented at an electronic device.
In the discussion that follows, an electronic device that is in communication with a display generation component and one or more input devices is described. It should be understood that the electronic device optionally is in communication with one or more other physical user-interface devices, such as a touch-sensitive surface, a physical keyboard, a mouse, a joystick, a hand tracking device, an eye tracking device, a stylus, etc. Further, as described above, it should be understood that the described electronic device, display, and touch-sensitive surface are optionally distributed amongst two or more devices. Therefore, as used in this disclosure, information displayed on the electronic device or by the electronic device is optionally used to describe information outputted by the electronic device for display on a separate display device (touch-sensitive or not). Similarly, as used in this disclosure, input received on the electronic device (e.g., touch input received on a touch-sensitive surface of the electronic device, or touch input received on the surface of a stylus) is optionally used to describe input received on a separate input device, from which the electronic device receives input information.
The device typically supports a variety of applications, such as one or more of the following: a drawing application, a presentation application, a word processing application, a website creation application, a disk authoring application, a spreadsheet application, a gaming application, a telephone application, a video conferencing application, an e-mail application, an instant messaging application, a workout support application, a photo management application, a digital camera application, a digital video camera application, a web browsing application, a digital music player application, a television channel browsing application, and/or a digital video player application.
As illustrated in
Communication circuitry 222A, 222B optionally includes circuitry for communicating with electronic devices, networks, such as the Internet, intranets, a wired network and/or a wireless network, cellular networks, and wireless local area networks (LANs). Communication circuitry 222A, 222B optionally includes circuitry for communicating using near-field communication (NFC) and/or short-range communication, such as Bluetooth®.
Processor(s) 218A, 218B include one or more general processors, one or more graphics processors, and/or one or more digital signal processors. In some examples, memory 220A, 220B is a non-transitory computer-readable storage medium (e.g., flash memory, random access memory, or other volatile or non-volatile memory or storage) that stores computer-readable instructions configured to be executed by processor(s) 218A, 218B to perform the techniques, processes, and/or methods described below. In some examples, memory 220A, 220B can include more than one non-transitory computer-readable storage medium. A non-transitory computer-readable storage medium can be any medium (e.g., excluding a signal) that can tangibly contain or store computer-executable instructions for use by or in connection with the instruction execution system, apparatus, or device. In some examples, the storage medium is a transitory computer-readable storage medium. In some examples, the storage medium is a non-transitory computer-readable storage medium. The non-transitory computer-readable storage medium can include, but is not limited to, magnetic, optical, and/or semiconductor storages. Examples of such storage include magnetic disks, optical discs based on CD, DVD, or Blu-ray technologies, as well as persistent solid-state memory such as flash, solid-state drives, and the like.
In some examples, display generation component(s) 214A, 214B include a single display (e.g., a liquid-crystal display (LCD), organic light-emitting diode (OLED), or other types of display). In some examples, display generation component(s) 214A, 214B includes multiple displays, such as a stereo pair of displays. In some examples, display generation component(s) 214A, 214B can include a display with touch capability (e.g., a touch screen), a projector, a holographic projector, a retinal projector, a transparent or translucent display, etc. In some examples, electronic devices 260 and 270 include touch-sensitive surface(s) 209A and 209B, respectively, for receiving user inputs, such as tap inputs and swipe inputs or other gestures. In some examples, display generation component(s) 214A, 214B and touch-sensitive surface(s) 209A, 209B form touch-sensitive display(s) (e.g., a touch screen integrated with electronic devices 260 and 270, respectively, or external to electronic devices 260 and 270, respectively, that is in communication with electronic devices 260 and 270).
Electronic devices 260 and 270 optionally include image sensor(s) 206A and 206B, respectively. Image sensor(s) 206A/206B optionally include one or more visible light image sensors, such as charge-coupled device (CCD) sensors, and/or complementary metal-oxide-semiconductor (CMOS) sensors operable to obtain images of physical objects from the real-world environment. Image sensor(s) 206A/206B also optionally include one or more infrared (IR) sensors, such as a passive or an active IR sensor, for detecting infrared light from the real-world environment. For example, an active IR sensor includes an IR emitter for emitting infrared light into the real-world environment. Image sensor(s) 206A/206B also optionally include one or more cameras configured to capture movement of physical objects in the real-world environment. Image sensor(s) 206A/206B also optionally include one or more depth sensors configured to detect the distance of physical objects from device 260/270. In some examples, information from one or more depth sensors can allow the device to identify and differentiate objects in the real-world environment from other objects in the real-world environment. In some examples, one or more depth sensors can allow the device to determine the texture and/or topography of objects in the real-world environment.
In some examples, electronic devices 260 and 270 use CCD sensors, event cameras, and depth sensors in combination to detect the physical environment around electronic devices 260 and 270. In some examples, image sensor(s) 206A/206B include a first image sensor and a second image sensor. The first image sensor and the second image sensor work in tandem and are optionally configured to capture different information of physical objects in the real-world environment. In some examples, the first image sensor is a visible light image sensor and the second image sensor is a depth sensor. In some examples, device 260/270 uses image sensor(s) 206A/206B to detect the position and orientation of device 260/270 and/or display generation component(s) 214A/214B in the real-world environment. For example, device 260/270 uses image sensor(s) 206A/206B to track the position and orientation of display generation component(s) 214A/214B relative to one or more fixed objects in the real-world environment.
In some examples, device 260/270 includes microphone(s) 213A/213B or other audio sensors. Device 260/270 uses microphone(s) 213A/213B to detect sound from the user and/or the real-world environment of the user. In some examples, microphone(s) 213A/213B includes an array of microphones (a plurality of microphones) that optionally operate in tandem, such as to identify ambient noise or to locate the source of sound in space of the real-world environment.
In some examples, device 260/270 includes location sensor(s) 204A/204B for detecting a location of device 260/270 and/or display generation component(s) 214A/214B. For example, location sensor(s) 204A/204B can include a global positioning system (GPS) receiver that receives data from one or more satellites and allows device 260/270 to determine the device's absolute position in the physical world.
In some examples, device 260/270 includes orientation sensor(s) 210A/210B for detecting orientation and/or movement of device 260/270 and/or display generation component(s) 214A/214B. For example, device 260/270 uses orientation sensor(s) 210A/210B to track changes in the position and/or orientation of device 260/270 and/or display generation component(s) 214A/214B, such as with respect to physical objects in the real-world environment. Orientation sensor(s) 210A/210B optionally include one or more gyroscopes and/or one or more accelerometers.
Device 260/270 includes hand tracking sensor(s) 202A/202B and/or eye tracking sensor(s) 212A/212B, in some examples. Hand tracking sensor(s) 202A/202B are configured to track the position/location of one or more portions of the user's hands, and/or motions of one or more portions of the user's hands with respect to the extended reality environment, relative to the display generation component(s) 214A/214B, and/or relative to another defined coordinate system. Eye tracking sensor(s) 212A/212B are configured to track the position and movement of a user's gaze (eyes, face, or head, more generally) with respect to the real-world or extended reality environment and/or relative to the display generation component(s) 214A/214B. In some examples, hand tracking sensor(s) 202A/202B and/or eye tracking sensor(s) 212A/212B are implemented together with the display generation component(s) 214A/214B. In some examples, the hand tracking sensor(s) 202A/202B and/or eye tracking sensor(s) 212A/212B are implemented separate from the display generation component(s) 214A/214B.
In some examples, the hand tracking sensor(s) 202A/202B can use image sensor(s) 206A/206B (e.g., one or more IR cameras, 3D cameras, depth cameras, etc.) that capture three-dimensional information from the real-world environment, including one or more hands (e.g., of a human user). In some examples, the hands can be resolved with sufficient resolution to distinguish fingers and their respective positions. In some examples, one or more image sensor(s) 206A/206B are positioned relative to the user to define a field of view of the image sensor(s) 206A/206B and an interaction space in which finger/hand position, orientation, and/or movement captured by the image sensors are used as inputs (e.g., to distinguish from a user's resting hand or other hands of other persons in the real-world environment). Tracking the fingers/hands for input (e.g., gestures, touch, tap, etc.) can be advantageous in that it does not require the user to touch, hold, or wear any sort of beacon, sensor, or other marker.
In some examples, eye tracking sensor(s) 212A/212B include at least one eye tracking camera (e.g., an infrared (IR) camera) and/or illumination sources (e.g., IR light sources, such as LEDs) that emit light towards a user's eyes. The eye tracking cameras may be pointed towards a user's eyes to receive reflected IR light from the light sources directly or indirectly from the eyes. In some examples, both eyes are tracked separately by respective eye tracking cameras and illumination sources, and a focus/gaze can be determined from tracking both eyes. In some examples, one eye (e.g., a dominant eye) is tracked by a respective eye tracking camera/illumination source(s).
Device 260/270 and system 201 are not limited to the components and configuration of
As shown in
As mentioned above, in some examples, the first electronic device 360 is optionally in a multi-user communication session with the second electronic device 370. For example, the first electronic device 360 and the second electronic device 370 (e.g., via communication circuitry 222A/222B) are configured to present a shared three-dimensional environment 350A/350B that includes one or more shared virtual objects (e.g., content such as images, video, audio and the like, representations of user interfaces of applications, etc.). As used herein, the term “shared three-dimensional environment” refers to a three-dimensional environment that is independently presented, displayed, and/or visible at two or more electronic devices via which content, applications, data, and the like may be shared and/or presented to users of the two or more electronic devices. In some examples, while the first electronic device 360 is in the multi-user communication session with the second electronic device 370, an avatar corresponding to the user of one electronic device is optionally displayed in the three-dimensional environment that is displayed via the other electronic device. For example, as shown in
In some examples, the presentation of avatars 315/317 as part of a shared three-dimensional environment is optionally accompanied by an audio effect corresponding to a voice of the users of the electronic devices 370/360. For example, the avatar 315 displayed in the three-dimensional environment 350A using the first electronic device 360 is optionally accompanied by an audio effect corresponding to the voice of the user of the second electronic device 370. In some such examples, when the user of the second electronic device 370 speaks, the voice of the user may be detected by the second electronic device 370 (e.g., via the microphone(s) 213B) and transmitted to the first electronic device 360 (e.g., via the communication circuitry 222B/222A), such that the detected voice of the user of the second electronic device 370 may be presented as audio (e.g., using speaker(s) 216A) to the user of the first electronic device 360 in three-dimensional environment 350A. In some examples, the audio effect corresponding to the voice of the user of the second electronic device 370 may be spatialized such that it appears to the user of the first electronic device 360 to emanate from the location of avatar 315 in the shared three-dimensional environment 350A (e.g., despite being outputted from the speakers of the first electronic device 360). Similarly, the avatar 317 displayed in the three-dimensional environment 350B using the second electronic device 370 is optionally accompanied by an audio effect corresponding to the voice of the user of the first electronic device 360. In some such examples, when the user of the first electronic device 360 speaks, the voice of the user may be detected by the first electronic device 360 (e.g., via the microphone(s) 213A) and transmitted to the second electronic device 370 (e.g., via the communication circuitry 222A/222B), such that the detected voice of the user of the first electronic device 360 may be presented as audio (e.g., using speaker(s) 216B) to the user of the second electronic device 370 in three-dimensional environment 350B. In some examples, the audio effect corresponding to the voice of the user of the first electronic device 360 may be spatialized such that it appears to the user of the second electronic device 370 to emanate from the location of avatar 317 in the shared three-dimensional environment 350B (e.g., despite being outputted from the speakers of the second electronic device 370).
In some examples, while in the multi-user communication session, the avatars 315/317 are displayed in the three-dimensional environments 350A/350B with respective orientations that correspond to and/or are based on orientations of the electronic devices 360/370 (and/or the users of electronic devices 360/370) in the physical environments surrounding the electronic devices 360/370. For example, as shown in
Additionally, in some examples, while in the multi-user communication session, a viewpoint of the three-dimensional environments 350A/350B and/or a location of the viewpoint of the three-dimensional environments 350A/350B optionally changes in accordance with movement of the electronic devices 360/370 (e.g., by the users of the electronic devices 360/370). For example, while in the communication session, if the first electronic device 360 is moved closer toward the representation of the table 306′ and/or the avatar 315 (e.g., because the user of the first electronic device 360 moved forward in the physical environment surrounding the first electronic device 360), the viewpoint of the three-dimensional environment 350A would change accordingly, such that the representation of the table 306′, the representation of the window 309′ and the avatar 315 appear larger in the field of view. In some examples, each user may independently interact with the three-dimensional environment 350A/350B, such that changes in viewpoints of the three-dimensional environment 350A and/or interactions with virtual objects in the three-dimensional environment 350A by the first electronic device 360 optionally do not affect what is shown in the three-dimensional environment 350B at the second electronic device 370, and vice versa.
In some examples, the avatars 315/317 are a representation (e.g., a full-body rendering) of the users of the electronic devices 370/360. In some examples, the avatar 315/317 is a representation of a portion (e.g., a rendering of a head, face, head and torso, etc.) of the users of the electronic devices 370/360. In some examples, the avatars 315/317 are a user-personalized, user-selected, and/or user-created representation displayed in the three-dimensional environments 350A/350B that is representative of the users of the electronic devices 370/360. It should be understood that, while the avatars 315/317 illustrated in
As mentioned above, while the first electronic device 360 and the second electronic device 370 are in the multi-user communication session, the three-dimensional environments 350A/350B may be a shared three-dimensional environment that is presented using the electronic devices 360/370. In some examples, content that is viewed by one user at one electronic device may be shared with another user at another electronic device in the multi-user communication session. In some such examples, the content may be experienced (e.g., viewed and/or interacted with) by both users (e.g., via their respective electronic devices) in the shared three-dimensional environment. For example, as shown in
In some examples, the three-dimensional environments 350A/350B include unshared content that is private to one user in the multi-user communication session. For example, in
As mentioned previously above, in some examples, the user of the first electronic device 360 and the user of the second electronic device 370 are associated with a spatial group 340 within the multi-user communication session. In some examples, the spatial group 340 controls locations at which the users and/or content are (e.g., initially) positioned in the shared three-dimensional environment. For example, the spatial group 340 may be a baseline (e.g., a first or default) spatial group within the multi-user communication session. For example, when the user of the first electronic device 360 and the user of the second electronic device 370 initially join the multi-user communication session, the user of the first electronic device 360 and the user of the second electronic device 370 are automatically (and initially, as discussed in more detail below) positioned according to the spatial group 340 within the multi-user communication session. In some examples, while the users are in the spatial group 340 as shown in
It should be understood that, in some examples, more than two electronic devices may be communicatively linked in a multi-user communication session. For example, in a situation in which three electronic devices are communicatively linked in a multi-user communication session, a first electronic device would display two avatars, rather than just one avatar, corresponding to the users of the other two electronic devices. It should therefore be understood that the various processes and exemplary interactions described herein with reference to the first electronic device 360 and the second electronic device 370 in the multi-user communication session optionally apply to situations in which more than two electronic devices are communicatively linked in a multi-user communication session.
In some examples, it may be advantageous to selectively alter (e.g., update relative to the first, default spatial group) the spatial group of users in a multi-user communication session based on content that is displayed in the three-dimensional environment, including updating display of the avatars corresponding to the users of electronic devices that are communicatively linked in the multi-user communication session. For example, as described herein, content may be shared and presented in the three-dimensional environment such that the content is optionally viewable by and/or interactive to multiple users in the multi-user communication session. As discussed above, the three-dimensional environment optionally includes avatars corresponding to the users of the electronic devices that are in the communication session. In some instances, the presentation of the content in the three-dimensional environment with the avatars corresponding to the users of the electronic devices may cause portions of the content to be blocked or obscured from a viewpoint of one or more users in the multi-user communication session. In some examples, a change in the presentation of the content (e.g., a change in a size of the content) in the three-dimensional environment may similarly produce obstructions and/or other complications relative to a viewpoint of one or more users in the multi-user communication session. Accordingly, in some examples, the positions of the avatars corresponding to the users in the multi-user communication session may be updated based on the type of content that is being presented, as described herein in more detail.
As similarly described above with reference to
In some examples, as previously discussed above with reference to
In some examples, the spatial group of the users in the multi-user communication session selectively changes in accordance with a determination that a number of the users in the multi-user communication session changes. For example, from
In some examples, when the avatar 421 corresponding to the user of the fourth electronic device is displayed in the shared three-dimensional environment, the spatial group of the users in the multi-user communication session is updated to accommodate the user of the fourth electronic device (not shown); however, a spatial separation between adjacent users in the multi-user communication session is maintained. For example, as shown in
Alternatively, in some examples, when a new user joins the multi-user communication session, the avatars corresponding to the users in the multi-user communication session may not remain separated from adjacent avatars (and/or viewpoint(s) of the user(s)) by the first distance 431A. For example, as additional users join the multi-user communication session, such as the user of the fourth electronic device represented by the avatar 421 or an additional user, such as a user of a fifth electronic device (not shown), the distance between adjacent avatars and/or viewpoints in the shared three-dimensional environment (e.g., the three-dimensional environments 450A/450B) is decreased to a distance smaller than the first distance 431A in
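By way of illustration only, the following sketch (not taken from the disclosed examples) shows one way the behavior above could be modeled: if participants are assumed to be placed evenly on a circle of roughly fixed radius, the distance between adjacent participants necessarily decreases as more users join. The radius value and function name are assumptions introduced for the example.

```python
import math

def adjacent_separation(num_participants: int, radius: float = 1.5) -> float:
    """Distance between adjacent participants placed evenly on a circle.

    Assumes a fixed-radius circular template; the radius is an illustrative
    value, not one specified by the disclosure.
    """
    if num_participants < 2:
        return 0.0
    # Chord length between neighboring points on the circle.
    return 2.0 * radius * math.sin(math.pi / num_participants)

# Example: the separation between adjacent users shrinks as users join.
for n in (3, 4, 5):
    print(n, round(adjacent_separation(n), 3))
```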
It should be understood that a treatment similar to the above would be applied to an instance in which a user leaves the multi-user communication session in the examples of
In some examples, the spatial group of the users, and thus the spatial separation between pairs of users, in the multi-user communication session is not updated when a respective user ceases sharing their avatar (e.g., toggles off a setting for sharing their avatar). For example, from
Further, as shown in
In some examples, as mentioned above, the spatial separation between adjacent users in the multi-user communication session is selectively updated when content is shared in the shared three-dimensional environment. For example, in
In some examples, as shown in
In
In some examples, in response to receiving the selection input 472A, the second electronic device 470 displays media player user interface 445 in the three-dimensional environment 450B, as shown in
In some examples, in
In the example of
Alternatively, in some examples, when content of the first type is shared in the multi-user communication session, the avatars corresponding to the users in the multi-user communication session may not remain separated from adjacent avatars (and/or viewpoint(s) of the user(s)) by the first distance 431A. For example, when content of the first type discussed above is displayed in the shared three-dimensional environment, the distance between adjacent avatars and/or viewpoints in the shared three-dimensional environment (e.g., the three-dimensional environments 450A/450B) is decreased to a distance smaller than the first distance 431A in
Alternatively, in some examples, content that is a second type of content, different from the first type of content discussed above, includes content that is above the threshold size discussed above when it is displayed in the three-dimensional environment 450A/450B. For example, in
In some examples, as shown in
In some examples, as mentioned above, the playback user interface 447 is content of the second type, particularly because, for example, the playback user interface 447 has a size that is greater than the threshold size discussed above. For example, width and/or length of the two-dimensional playback user interface 447 is greater than a threshold width and/or length (and/or area), as similarly discussed above. Accordingly, in some examples, when the playback user interface 447 is displayed in the shared three-dimensional environment, the spatial group 440 is updated to accommodate display of the playback user interface 447, which includes updating the spatial separation between pairs of users in the multi-user communication session due to the larger size (e.g., width and/or length) of the playback user interface 447. For example, in
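As an illustrative sketch only, the size-based distinction above could be modeled as follows, with content smaller than a threshold treated as the first type (spacing maintained) and larger content treated as the second type (spacing updated, here decreasing as the content grows). The numeric thresholds and the particular scaling policy are assumptions, not values from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class SharedContent:
    width: float   # meters
    height: float  # meters

# Illustrative thresholds; the disclosure does not specify numeric values.
WIDTH_THRESHOLD = 1.0
HEIGHT_THRESHOLD = 0.8

def is_second_type(content: SharedContent) -> bool:
    """Content larger than the threshold size is treated as the second type."""
    return content.width > WIDTH_THRESHOLD or content.height > HEIGHT_THRESHOLD

def separation_for(content: SharedContent, default_separation: float) -> float:
    """Maintain the default spacing for first-type content; update the spacing
    for second-type content (as one possible policy, decreasing it as the
    content grows so users can view the larger object together)."""
    if not is_second_type(content):
        return default_separation
    return default_separation * (WIDTH_THRESHOLD / content.width)
```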
In some examples, the spatial separation discussed above changes based on changes in the size of the playback user interface 447. For example, in
In some examples, as shown in
Additionally or alternatively, in some examples, when a size of the playback user interface 447 (e.g., a true or real size (e.g., width, length, area, volume, etc.) of the playback user interface 447 and/or an aspect ratio of the playback user interface 447) is changed in the shared three-dimensional environment 450A/450B (e.g., in accordance with the input discussed above), one or more visual properties (e.g., including a visual appearance) of the content of the playback user interface 447 are adjusted to account for the transition in size of the playback user interface 447 across the electronic devices in the communication session. For example, as shown in
In some examples, in accordance with a determination that the threshold amount of time discussed above has elapsed without detecting input (or some other indication, such as an indication of input(s) received at an electronic device different from the first electronic device 460) that causes the size (e.g., including the aspect ratio) of the playback user interface 447 to change in the shared three-dimensional environment, the content of the playback user interface 447 is (e.g., gradually) redisplayed and/or is once again made visible in the playback user interface 447. For example, as shown in
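A minimal sketch, under assumed values, of the resize behavior described above: the video content is hidden while resize input is being received and is gradually made visible again once no resize input has been detected for a threshold amount of time. The class, method names, and time constant are illustrative assumptions.

```python
import time

RESIZE_IDLE_THRESHOLD = 0.5  # seconds; an illustrative value

class PlaybackSurface:
    """Tracks the visibility of video content while its surface is resized."""

    def __init__(self):
        self.content_opacity = 1.0      # 1.0 = fully visible, 0.0 = hidden
        self._last_resize_time = None   # monotonic timestamp of last resize input

    def on_resize_input(self):
        # Hide (or fade out) the video content while its surface is being resized.
        self.content_opacity = 0.0
        self._last_resize_time = time.monotonic()

    def on_frame(self):
        # Gradually restore the content once no resize input has been received
        # for the threshold amount of time.
        if self._last_resize_time is None:
            return
        if time.monotonic() - self._last_resize_time >= RESIZE_IDLE_THRESHOLD:
            self.content_opacity = min(1.0, self.content_opacity + 0.1)
            if self.content_opacity == 1.0:
                self._last_resize_time = None
```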
It should be understood that, in some examples, alternative forms of inputs may cause the size (e.g., including the aspect ratio) of the playback user interface 447 to change, which optionally causes the content of the playback user interface 447 to be visually adjusted in the manner discussed above while the size of the playback user interface 447 is adjusted. For example, an input (or other indication) that causes the video content of the playback user interface 447 to be presented in an alternate orientation (e.g., from a landscape orientation as shown in
In some examples, when the size of the playback user interface 447 is increased in the shared three-dimensional environment, as represented by the increase in size of the rectangle 447A in
In some examples, as shown in
In some examples, the separation spacing described above is associated with a minimum separation spacing. For example, the minimum separation spacing is a fourth distance (e.g., 431D in
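One possible way to express the minimum separation spacing described above, as a sketch with assumed distances: the spacing shrinks as the shared content grows but is clamped so that it never falls below the minimum.

```python
def clamped_separation(content_width: float,
                       base_separation: float = 1.2,
                       min_separation: float = 0.6) -> float:
    """Spacing shrinks as the shared content grows, but never below a minimum.

    base_separation and min_separation are illustrative values; the disclosure
    refers to these only as respective distances (e.g., a first and a fourth
    distance), without numeric values.
    """
    # One possible policy: spacing inversely proportional to content width.
    proposed = base_separation / max(content_width, 1.0)
    return max(proposed, min_separation)
```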
In some examples, the updating of the separations between adjacent avatars within the spatial group 440 according to shared content of the first type and the second type may be similarly applied to the two-dimensional representation 427 discussed previously with reference to
In some examples, while the playback user interface 447 is shared and displayed in the multi-user communication session, if a user of a fifth electronic device (not shown) joins the multi-user communication session without sharing their avatar (e.g., with a setting for sharing their avatar toggled off), as similarly discussed herein above, the shared three-dimensional environment includes a two-dimensional representation corresponding to the user of the fifth electronic device. For example, in
As previously discussed above, when the size of the playback user interface 447 is increased in the shared three-dimensional environment, the separation spacing between adjacent users in the spatial group 440 is decreased accordingly (e.g., proportionally or equally), such as to the third distance 431C shown in
In some examples, while in the multi-user communication session, the first electronic device 460 and the second electronic device 470 display a respective avatar corresponding to a respective user at a respective location within a spatial group based on a location of a two-dimensional representation of the respective user within a communication representation (e.g., canvas) that is displayed in the shared three-dimensional environment. For example, in
In the example of
In
In some examples, in accordance with a determination that a respective user who is currently represented in the shared three-dimensional environment by a two-dimensional representation toggles on their avatar such that the respective user is in a spatial state, a location of the avatar corresponding to the respective user within the spatial group 440 is selected based on a position of the two-dimensional representation in the canvas. For example, in
In some examples, as mentioned above, a location at which the avatar 411 is displayed within the spatial group 440 is selected based on a position of the second representation 428b within the canvas 427 in
Alternatively, in some examples, if the user of the fourth electronic device (not shown) provided input for toggling on their avatar, rather than the user of the fifth electronic device as discussed above, an avatar corresponding to the user of the fourth electronic device would be displayed in the three-dimensional environment 450A/450B rather than the avatar 411 discussed above. Additionally, as similarly discussed above, the avatar corresponding to the user of the fourth electronic device would optionally be displayed at a location within the spatial group 440 based on a position of the first representation 428a within the canvas 427 in
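As a sketch of the canvas-to-seat mapping described above, assuming the two-dimensional representations are ordered left to right within the canvas, a user who toggles on their avatar could be seated with a lateral offset that mirrors the position of their representation in the canvas. The function name and spacing value are assumptions.

```python
def seat_offset_for_tile(tile_index: int,
                         tile_count: int,
                         spacing: float = 1.0) -> float:
    """Lateral seat offset (negative = left, positive = right) for the avatar
    of a user whose two-dimensional representation occupies tile_index in a
    canvas of tile_count tiles. Illustrative policy and spacing value."""
    return (tile_index - (tile_count - 1) / 2.0) * spacing

# Example: with two tiles, the left tile (index 0) maps to a seat to the left
# of the canvas location, and the right tile (index 1) to a seat on the right.
print(seat_offset_for_tile(0, 2), seat_offset_for_tile(1, 2))
```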
Additionally, in some examples, as shown in
Thus, one advantage of the disclosed method of automatically updating a spatial group of users in a multi-user communication session, including changing a spatial separation between adjacent users (e.g., represented by their avatars), based on the type of content that is shared and displayed in a shared three-dimensional environment is that users may be provided with an unobscured view of, and unobstructed interaction with, the shared content. As another benefit, automatically updating the spatial group of the users as discussed above reduces the need for user input to manually rearrange the shared content and/or the positions of users in the shared three-dimensional environment, which in turn reduces the power consumption of the electronic devices that would otherwise be required to respond to such user corrections. Additionally, automatically updating the spatial group of users, including changing the spatial separation between adjacent users, when transitioning between displaying two-dimensional representations and displaying three-dimensional representations (e.g., avatars) of users enables a spatial context of a two-dimensional representation of a respective user to be automatically preserved relative to the viewpoints of the other users, thereby maintaining the spatial context of the users within the spatial group overall, which further improves user-device interaction.
As described above, while electronic devices are communicatively linked in a multi-user communication session, displaying shared content in a shared three-dimensional environment causes a spatial group of the users of the electronic devices to be updated based on the content type. Attention is now directed to additional or alternative examples of updating the spatial group of users in a multi-user communication session when shared content is displayed in a three-dimensional environment shared between electronic devices.
As previously discussed herein, in
In some examples, as previously discussed above with reference to
As previously discussed herein, while the user of the first electronic device 560, the user of the second electronic device 570, and the user of the third electronic device (not shown) are in the multi-user communication session, content may be shared and displayed in the shared three-dimensional environment such that the content is viewable by and interactive to the users. In some examples, as shown in
In some examples, as shown in
In some examples, in response to receiving the selection of the option 523A in the user interface object 530, the first electronic device initiates a process to share and display content associated with the user interface object 530 in the shared three-dimensional environment. In some examples, when shared content is displayed in the shared three-dimensional environment, the spatial group 540 is updated such that the avatars 515/517/519 are repositioned in the shared three-dimensional environment based on a location at which the shared content is to be displayed. For example, as shown in
In some examples, when the placement location for the shared content is determined, a reference line 539 is established between the placement location, represented by the square 541, and the center 532 of the spatial group 540, as shown in
In some examples, as shown in
In some examples, the approach above of moving/shifting the avatars about the reference line 539 allows a context of the spatial arrangement of the users to be preserved when the playback user interface 547 is displayed in the shared three-dimensional environment. For example, in
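The reference-line behavior described above could be sketched as follows, assuming a two-dimensional (floor-plane) layout: a reference line is formed from the spatial group's center toward the content placement location, and participants are re-seated facing the content at a fixed viewing distance while their left-to-right order about that line is preserved. The helper names, viewing distance, and spacing are assumptions, and the re-seating policy is one possibility consistent with, but not specified by, the description.

```python
import math

Vec2 = tuple[float, float]

def _sub(a: Vec2, b: Vec2) -> Vec2:
    return (a[0] - b[0], a[1] - b[1])

def _norm(v: Vec2) -> Vec2:
    length = math.hypot(*v) or 1.0
    return (v[0] / length, v[1] / length)

def reposition_facing_content(positions: dict[str, Vec2],
                              center: Vec2,
                              placement: Vec2,
                              viewing_distance: float = 2.0,
                              spacing: float = 1.0) -> dict[str, Vec2]:
    """Move participants so they face the shared content while preserving
    their left-to-right order about the reference line (center -> placement)."""
    forward = _norm(_sub(placement, center))   # along the reference line
    right = (forward[1], -forward[0])          # perpendicular to it

    # Signed lateral offset of each participant relative to the reference line.
    def lateral(p: Vec2) -> float:
        d = _sub(p, center)
        return d[0] * right[0] + d[1] * right[1]

    ordered = sorted(positions, key=lambda name: lateral(positions[name]))

    # Seat participants on a row perpendicular to the reference line, a fixed
    # viewing distance back from the content, keeping their original order.
    row_center = (placement[0] - forward[0] * viewing_distance,
                  placement[1] - forward[1] * viewing_distance)
    new_positions = {}
    for i, name in enumerate(ordered):
        offset = (i - (len(ordered) - 1) / 2.0) * spacing
        new_positions[name] = (row_center[0] + right[0] * offset,
                               row_center[1] + right[1] * offset)
    return new_positions
```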
In some examples, if the playback user interface 547 ceases to be shared and/or displayed in the three-dimensional environments 550A/550B (e.g., in response to user input), the avatars 515/517/519 are rearranged in the spatial group 540 to have a conversational arrangement, as similarly shown in
In some examples, the locations of the avatars corresponding to the users (e.g., including the viewpoints of the users) are repositioned in the spatial group 540 when a respective user in the multi-user communication session toggles their avatar off. For example, in
In
In some examples, as shown in
In some examples, when the avatar 515 corresponding to the user of the second electronic device 570 ceases to be displayed in the shared three-dimensional environment, the locations of the avatars corresponding to the users (e.g., including the viewpoints of the users) of the first electronic device 560 and the third electronic device (not shown) are updated (e.g., repositioned) in the spatial group 540 in the multi-user communication session. In some examples, as similarly discussed above, the locations of the avatars (e.g., including the viewpoints of the users), represented by the ovals 517A/519A, are repositioned based on the location at which the avatar corresponding to the user of the second electronic device 570 (e.g., including the viewpoint of the user of the second electronic device 570) occupied in the spatial group 540. For example, as shown in
Accordingly, in some examples, as shown in
Additionally, as shown in
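A sketch, under stated assumptions, of one way the repositioning after a user toggles off their avatar could be expressed: the departing participant's seat is vacated and the remaining participants are shifted so that their centroid returns to the original center of the spatial group. The disclosure describes this repositioning only qualitatively; the policy below is illustrative.

```python
Vec2 = tuple[float, float]

def recenter_after_departure(positions: dict[str, Vec2],
                             departed: str) -> dict[str, Vec2]:
    """Remove a participant whose avatar is no longer shared and shift the
    remaining participants so their centroid returns to the original group
    center."""
    original_center = (
        sum(p[0] for p in positions.values()) / len(positions),
        sum(p[1] for p in positions.values()) / len(positions),
    )
    remaining = {name: p for name, p in positions.items() if name != departed}
    if not remaining:
        return {}
    new_center = (
        sum(p[0] for p in remaining.values()) / len(remaining),
        sum(p[1] for p in remaining.values()) / len(remaining),
    )
    shift = (original_center[0] - new_center[0],
             original_center[1] - new_center[1])
    return {name: (p[0] + shift[0], p[1] + shift[1])
            for name, p in remaining.items()}
```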
In some examples, if the user of the second electronic device 570 provides input to toggle on the avatar 515 corresponding to the user of the second electronic device 570, when the avatar 515 corresponding to the user of the second electronic device 570 is redisplayed in the three-dimensional environments 550A/550B, the avatars 515/517/519 are rearranged in the spatial group 540 to have a conversational arrangement, as similarly shown in
In some examples, referring back to
In some examples, an arrangement of the avatars relative to a location at which content is shared and displayed in the shared three-dimensional environment is based on a location at which existing content is displayed in the shared three-dimensional environment. For example, in
In
Additionally, in
In
In some examples, in response to receiving the selection of the option 543A in the user interface object 544, the second electronic device 570 initiates a process to share and display content associated with the user interface object 544 in the shared three-dimensional environment. In some examples, as similarly discussed above, when shared content is displayed in the shared three-dimensional environment, the spatial group 540 is updated such that the avatars 515/517 are repositioned in the shared three-dimensional environment based on a location at which the shared content is to be displayed. In some examples, the location at which the shared content is to be displayed corresponds to the location at which existing content is displayed in the shared three-dimensional environment. For example, as shown in
In some examples, as shown in
Further, in some examples, in line with the above, at the first electronic device 560, when the media player user interface 545 is shared and displayed in the three-dimensional environment 550A, the two-dimensional representation 529 remains displayed at the same location relative to the viewpoint of the user of the first electronic device 560. For example, the two-dimensional representation 529 is optionally displayed with a reduced size in the three-dimensional environment 550A and is shifted rightward (or leftward) in the three-dimensional environment 550A when the media player user interface 545 is displayed, but the two-dimensional representation 529 and the media player user interface 545 occupy the same location in the three-dimensional environment 550A as the two-dimensional representation 529 in FIG. 5K relative to the viewpoint of the user of the first electronic device 560 prior to the media player user interface 545 being shared in the multi-user communication session. Additionally, in some examples, at the first electronic device 560, the avatar 515 corresponding to the user of the second electronic device 570 is angled/rotated relative to the viewpoint of the user of the first electronic device 560 to be facing toward a location that is based on a location at which the user interface object 544 was displayed at the second electronic device 570 in
In some examples, the placement location for the shared content is alternatively selected relative to the viewpoint of the user who provided input for sharing the content in the shared three-dimensional environment. In
In
In some examples, in response to receiving the selection of the option 523A in the user interface object 530, the first electronic device 560 initiates a process to share and display content associated with the user interface object 530 in the shared three-dimensional environment. In some examples, when shared content is displayed in a shared three-dimensional environment containing no existing shared content (e.g., no existing canvas), as similarly discussed above, the spatial group 540 is updated such that the avatars 515/517/519 are repositioned in the shared three-dimensional environment based on a location at which the shared content is to be displayed. For example, as shown in
In some examples, when the placement location for the shared content is determined, a reference line 539 is established between the placement location, represented by the square 541, and the location of the user of the first electronic device 560, represented by oval 517A in the spatial group 540, as shown in
Alternatively, in some examples, the arrangement of the users in the spatial group 540 when the content discussed above is shared in the multi-user communication session is determined based on a position of each user relative to a midpoint of the reference line 539. For example, individual lines may be established from the midpoint of the reference line 539 to each of the ovals 515A, representing the user of the second electronic device 570, and 519A, representing the user of the third electronic device (not shown), in the spatial group 540 in
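As an illustrative sketch of the midpoint-based arrangement above: each remaining participant can be classified as lying to the left or to the right of the reference line, evaluated about its midpoint, so that repositioning can keep each participant on their original side. The classification rule shown is an assumption consistent with, but not specified by, the description.

```python
Vec2 = tuple[float, float]

def _cross(o: Vec2, a: Vec2, b: Vec2) -> float:
    """Signed area: > 0 if b is to the left of the directed line o -> a, < 0 if right."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def side_assignments(sharer: Vec2,
                     placement: Vec2,
                     others: dict[str, Vec2]) -> dict[str, str]:
    """Classify each remaining participant as being to the left or right of the
    reference line (sharer -> placement), evaluated about its midpoint."""
    midpoint = ((sharer[0] + placement[0]) / 2.0,
                (sharer[1] + placement[1]) / 2.0)
    assignments = {}
    for name, pos in others.items():
        # Use the midpoint as the test origin so users near either end of the
        # line are classified consistently.
        cross = _cross(midpoint, placement, pos)
        assignments[name] = "left" if cross > 0 else "right"
    return assignments
```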
In some examples, as shown in
In some examples, in accordance with a determination that an event occurs for updating the spatial arrangement of the users within the spatial group 540 that is not associated with a user who is currently in a spatial state within the multi-user communication session, the users are rearranged in the shared three-dimensional environment at least in part based on an average detected orientation of the users' respective electronic devices. For example, as mentioned previously above, the electronic devices discussed herein are worn on a head of a particular user during use, such that the orientation of a particular electronic device is determined by an orientation of the head of the user (e.g., a particular degree of rotation along the pitch, yaw, and/or roll directions).
In
In
In some examples, when the new user (e.g., the user of the fourth electronic device (not shown)) joins the multi-user communication session, the new user is in a non-spatial state. For example, as similarly discussed above, when the user of the fourth electronic device (not shown) joins the spatial group 540, the user of the fourth electronic device is represented by a two-dimensional representation in the shared three-dimensional environment rather than an avatar similar to the avatars 515/517/519 in
In some examples, the placement location for the two-dimensional representation of the user of the fourth electronic device (not shown) is determined based on an average position of the users in the spatial group 540 and an average orientation of the electronic devices associated with the spatial users in the spatial group 540. As shown in
Alternatively, in some examples, the average orientation of the electronic devices is determined individually by the electronic devices relative to a nominal center of a field of view of each electronic device. For example, rather than averaging vectors corresponding to the orientation of the electronic devices in the manner discussed above, the positions of the users in the spatial group 540 relative to the center of the field of view of each user at each electronic device are determined, and the average orientation is determined based on the offsets of the positions.
In some examples, when the average center 532 and the average orientation are determined in the manners above, the placement location for the two-dimensional representation corresponding to the user of the fourth electronic device (not shown) may then be determined. In some examples, the placement location, represented by square 541, corresponds to a location in the spatial group 540 that is a predetermined distance away from the average center 532 and in the direction of the average orientation of the first electronic device 560, the second electronic device 570, and the third electronic device. For example, as shown in
In some examples, in accordance with a determination that the orientations of the first electronic device 560, the second electronic device 570, and/or the third electronic device (not shown) are equal and opposite (e.g., such that their average direction is zero) due to the electronic devices being oriented to face in opposite directions, the placement location for the two-dimensional representation would be determined in a manner similar to one of the approaches provided above. For example, the placement location for the two-dimensional representation corresponding to the user of the fourth electronic device is selected arbitrarily and/or is selected based on the average center 532 of the spatial group 540 (e.g., irrespective of the orientations of the electronic devices).
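The average-position and average-orientation placement described above, including the fallback when the device orientations cancel out, could be sketched as follows, assuming a two-dimensional layout. The predetermined distance and the fallback direction are assumptions.

```python
import math

Vec2 = tuple[float, float]

def placement_for_nonspatial_user(positions: list[Vec2],
                                  forward_vectors: list[Vec2],
                                  distance: float = 1.5) -> Vec2:
    """Place a two-dimensional representation a predetermined distance from the
    average position of the spatial users, in the direction of the average
    orientation of their devices."""
    n = len(positions)
    center = (sum(p[0] for p in positions) / n,
              sum(p[1] for p in positions) / n)

    avg = (sum(v[0] for v in forward_vectors) / n,
           sum(v[1] for v in forward_vectors) / n)
    length = math.hypot(*avg)

    if length < 1e-6:
        # Orientations cancel out (devices face opposite directions): fall back
        # to an arbitrary default direction relative to the average center,
        # consistent with the arbitrary/center-based fallback described above.
        direction = (0.0, 1.0)
    else:
        direction = (avg[0] / length, avg[1] / length)

    return (center[0] + direction[0] * distance,
            center[1] + direction[1] * distance)
```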
In some examples, as similarly discussed above, when the placement location for the two-dimensional representation is determined, a reference line 539 is established between the placement location, represented by the square 541, and the average center 532 of the spatial group 540, as shown in
Accordingly, as outlined above, in some examples, when an event is detected that is associated with a user who is not currently in a spatial state and that causes the spatial arrangement of users in the spatial group 540 to be updated, the users currently in a spatial state within the spatial group 540 are repositioned within the shared three-dimensional environment based on the average position of the users and the average orientation of their respective electronic devices. It should be understood that, in the example shown in
Thus, one advantage of the disclosed method of automatically repositioning users in a spatial group in a multi-user communication session, with a directionality that is based on a location of shared content, is that a spatial context of the arrangement of the users may be preserved when the shared content is displayed, while also providing an unobscured view of the shared content in the shared three-dimensional environment and a visually seamless transition in the movement of the avatars corresponding to the users. As another benefit, automatically repositioning users in the spatial group when a respective user causes their avatar to no longer be displayed in the shared three-dimensional environment helps reduce the need for the remaining users to provide input to manually reposition themselves in the spatial group after the avatar corresponding to the respective user is no longer displayed.
As described above, while electronic devices are communicatively linked in a multi-user communication session, displaying shared content in the multi-user communication session causes relative locations of the users of the electronic devices to be updated based on the location of the shared content, including moving the avatars corresponding to the users in a direction relative to the location of the shared content. Attention is now directed to examples pertaining to selectively updating a number of “seats” (e.g., predetermined spatial openings) within a spatial group of users in a multi-user communication session.
As used herein, a spatial group within the multi-user communication session may be associated with a plurality of seats that determines the spatial arrangement of the spatial group. For example, the spatial group is configured to accommodate a plurality of users (e.g., from two users up to “n” users) and each user of the plurality of users is assigned to (e.g., occupies) a seat of the plurality of seats within the spatial group. In some examples, when a user joins or leaves the multi-user communication session, a number of seats in the spatial group is selectively changed. For example, if a user joins the multi-user communication session, the number of seats in the spatial group is increased by one. On the other hand, if a user leaves the multi-user communication session, the number of seats in the spatial group is not decreased by one; rather, the number of seats in the spatial group is maintained until an event occurs that causes the number of seats to be reset to correspond to a current number of users in the multi-user communication session, as illustrated via the examples below. Accordingly, if a new user joins the multi-user communication session while a seat is unoccupied in the spatial group, the new user will be placed at the unoccupied seat within the spatial group, which results in fewer and/or less perceptible changes in the arrangement of avatars corresponding to the users in the shared three-dimensional environment.
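A minimal sketch of this seat bookkeeping is shown below; the class and method names are hypothetical, and the sketch models only the behavior described in this paragraph (joining fills an open seat or adds one, leaving keeps the seat, and a reset trims the seats to the current number of users).

```python
class SpatialGroup:
    """Tracks which seats in a spatial group are occupied and by whom."""

    def __init__(self):
        self.seats = []          # each entry is a user id or None (unoccupied)

    def join(self, user_id):
        """Place a new user at the first unoccupied seat, if any; otherwise
        add a seat, increasing the seat count by one."""
        for i, occupant in enumerate(self.seats):
            if occupant is None:
                self.seats[i] = user_id
                return i
        self.seats.append(user_id)
        return len(self.seats) - 1

    def leave(self, user_id):
        """Mark the user's seat as unoccupied but keep it in the group."""
        self.seats[self.seats.index(user_id)] = None

    def reset_unoccupied_seats(self):
        """Drop unoccupied seats so the seat count matches the current users,
        e.g., when shared content is displayed."""
        self.seats = [s for s in self.seats if s is not None]


group = SpatialGroup()
for user in ("A", "B", "C"):
    group.join(user)
group.leave("C")            # the seat stays open
print(group.join("D"))      # the new user takes the open seat (index 2)
group.reset_unoccupied_seats()
```

In the usage example, the user who joins after another user leaves takes over the open seat, so the remaining avatars do not need to shift.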
In some examples, as similarly discussed above, the spatial group 640 may be associated with a plurality of “seats” (e.g., predetermined spatial openings) in the shared three-dimensional environment that are configured to be occupied by one or more users in the multi-user communication session. For example, the plurality of seats determines the spatial arrangement of the spatial group 640 discussed above. In some examples, the plurality of seats in the shared three-dimensional environment may generally be radially arranged around the center 632 of the spatial group 640, where each seat of the plurality of seats is positioned an equal distance from the center 632 and is separated from an adjacent seat by an equal distance, angle, and/or arc length relative to the center 632. In
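For illustration, a radial arrangement of this kind can be computed as in the sketch below, which spaces n seats at equal angles on a circle around the center; the radius value and starting angle are arbitrary assumptions.

```python
import math

def radial_seat_positions(n_seats, center=(0.0, 0.0), radius=1.5):
    """Return n seat positions spaced evenly on a circle around the center,
    so adjacent seats are separated by equal angles (and thus arc lengths)."""
    cx, cy = center
    return [
        (cx + radius * math.cos(2 * math.pi * i / n_seats),
         cy + radius * math.sin(2 * math.pi * i / n_seats))
        for i in range(n_seats)
    ]

# Example: four seats around the center of the spatial group.
for seat in radial_seat_positions(4):
    print(seat)
```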
In some examples, as alluded to above, despite the user of the third electronic device (not shown) no longer being in the multi-user communication session, a seat 638 associated with the user of the third electronic device (e.g., previously occupied by the user of the third electronic device) remains established in the spatial group 640. For example, as shown in
In some examples, while the seat 638 is unoccupied in the spatial group 640, if a new user joins the multi-user communication session, the new user is placed at the seat 638 in the spatial group 640 (e.g., an avatar corresponding to the new user is displayed at a location in the shared three-dimensional environment that corresponds to the seat 638 in the spatial group 640). For example, from
As mentioned above, in some examples, if the spatial group 640 includes an unoccupied seat (e.g., such as seat 638), the seat remains established (e.g., included) in the spatial group 640 until an event occurs that causes the seat to be reset (e.g., cleared out) in the spatial group 640. In some examples, one such event includes the display of shared content in the multi-user communication session. In
In some examples, when the playback user interface 647 is displayed in the three-dimensional environments 650A/650B, the spatial arrangement of the spatial group 640 is updated according to any of the exemplary methods discussed herein above. Additionally, as mentioned above, when the playback user interface 647, represented by rectangle 647A, is displayed in the three-dimensional environments 650A/650B, the spatial group 640 is updated to reset any unoccupied seats in the spatial group 640. Particularly, as shown in
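The following sketch illustrates, under the same simplified conventions as the earlier sketches (seats held as user identifiers or None, seats spaced evenly on a circle), how the seat reset and the arrangement update could be folded into a single pass when shared content appears; it is illustrative only and does not reflect the specific layout used in the examples above.

```python
import math

def on_shared_content_displayed(seats, center=(0.0, 0.0), radius=1.5):
    """Reset unoccupied seats and recompute the radial layout in one pass,
    so a single rearrangement covers both transitions."""
    occupied = [s for s in seats if s is not None]   # drop unoccupied seats
    if not occupied:
        return {}
    cx, cy = center
    return {
        user: (cx + radius * math.cos(2 * math.pi * i / len(occupied)),
               cy + radius * math.sin(2 * math.pi * i / len(occupied)))
        for i, user in enumerate(occupied)
    }

# Example: one open seat is cleared when the playback user interface appears.
print(on_shared_content_displayed(["A", None, "B"]))
```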
In some examples, as similarly discussed above, when the user of the second electronic device 670 is no longer associated with the spatial group 640, the spatial group 640 maintains seat 638 that was previously occupied by the user of the second electronic device 670 (e.g., by the avatar 615). In some examples, as mentioned above, the seat 638 remains established (e.g., included) in the spatial group 640 until an event occurs that causes the seat 638 to be reset (e.g., cleared out). In some examples, because the shared three-dimensional environment includes shared content (e.g., the playback user interface 647) in
Accordingly, as outlined above, while users are associated with a spatial group in a multi-user communication session, a seat belonging to a user who leaves the multi-user communication session will remain open such that a new user who joins the multi-user communication session will automatically occupy the open seat, until an event occurs that causes the open seat to be cleared out in the spatial group. Thus, as one advantage, the disclosed method helps avoid frequent and/or unnecessary shifting of avatars and/or viewpoints of users in a spatial group in the multi-user communication session, which could be distracting and/or otherwise disruptive for the users who are engaging in a shared experience within the multi-user communication session. Another advantage of the disclosed method is that, because the resetting of the number of seats in the spatial group coincides with a transition in the display of shared content in the multi-user communication session, one spatial arrangement update accounts for two transition events, which helps reduce power consumption.
It is understood that the examples shown and described herein are merely exemplary and that additional and/or alternative elements may be provided within the three-dimensional environments for interacting with the illustrative content. It should be understood that the appearance, shape, form and size of each of the various user interface elements and objects shown and described herein are exemplary and that alternative appearances, shapes, forms and/or sizes may be provided. For example, the virtual objects representative of user interfaces (e.g., private application window 330, user interface objects 430, 530 and 630 and/or user interfaces 445, 447, 547, and 647) may be provided in an alternative shape than a rectangular shape, such as a circular shape, triangular shape, etc. In some examples, the various selectable options (e.g., the options 423A, 523A, 623A, and 623B and/or the affordance 651), user interface elements (e.g., user interface elements 520 and/or 623), control elements (e.g., playback controls 456, 556 and/or 656), etc. described herein may be selected verbally via user verbal commands (e.g., “select option” verbal command). Additionally or alternatively, in some examples, the various options, user interface elements, control elements, etc. described herein may be selected and/or manipulated via user input received via one or more separate input devices in communication with the electronic device(s). For example, selection input may be received via physical input devices, such as a mouse, trackpad, keyboard, etc. in communication with the electronic device(s).
In some examples, at 704, while displaying the computer-generated environment including the three-dimensional representation corresponding to the user of the second electronic device and the three-dimensional representation corresponding to the user of the third electronic device, the first electronic device receives, via the one or more input devices, a first input corresponding to a request to display shared content in the computer-generated environment. For example, as shown in
In some examples, at 706, in response to receiving the first input, at 708, in accordance with a determination that the shared content is a first type of content, at 710, the first electronic device displays, via the display, a first object corresponding to the shared content in the computer-generated environment. For example, as shown in
It is understood that process 700 is an example and that more, fewer, or different operations can be performed in the same or in a different order. Additionally, the operations in process 700 described above are, optionally, implemented by running one or more functional modules in an information processing apparatus such as general-purpose processors (e.g., as described with respect to
In some examples, at 804, while displaying the computer-generated environment including the three-dimensional representation corresponding to the user of the second electronic device and the three-dimensional representation corresponding to the user of the third electronic device, the first electronic device receives, via the one or more input devices, a first input corresponding to a request to display content in the computer-generated environment. For example, as shown in
In some examples, at 810, the first electronic device displays the three-dimensional representation corresponding to the user of the second electronic device at a first updated location and the three-dimensional representation corresponding to the user of the third electronic device at a second updated location, different from the first updated location, in the computer-generated environment relative to the viewpoint, including, at 812, moving the three-dimensional representation of the user of the second electronic device to the first updated location and moving the three-dimensional representation of the user of the third electronic device to the second updated location in a respective direction that is selected based on a location of the first object. For example, as shown in
It is understood that process 800 is an example and that more, fewer, or different operations can be performed in the same or in a different order. Additionally, the operations in process 800 described above are, optionally, implemented by running one or more functional modules in an information processing apparatus such as general-purpose processors (e.g., as described with respect to
In some examples, at 904, while displaying the computer-generated environment including the three-dimensional representation corresponding to the user of the second electronic device, the first electronic device receives, via the one or more input devices, a first input corresponding to a request to display shared content in the computer-generated environment. For example, as shown in
It is understood that process 900 is an example and that more, fewer, or different operations can be performed in the same or in a different order. Additionally, the operations in process 900 described above are, optionally, implemented by running one or more functional modules in an information processing apparatus such as general-purpose processors (e.g., as described with respect to
Therefore, according to the above, some examples of the disclosure are directed to a method comprising at a first electronic device in communication with a display, one or more input devices, and a second electronic device: while in a communication session with the second electronic device, displaying, via the display, a computer-generated environment including a three-dimensional representation corresponding to a user of the second electronic device; while displaying the computer-generated environment including the three-dimensional representation corresponding to the user of the second electronic device, receiving, via the one or more input devices, a first input corresponding to a request to display shared content in the computer-generated environment; and in response to receiving the first input: in accordance with a determination that the shared content is a first type of content, displaying, via the display, a first object corresponding to the shared content in the computer-generated environment, and updating display of the three-dimensional representation corresponding to the user of the second electronic device, such that the three-dimensional representation corresponding to the user of the second electronic device and a viewpoint of the first electronic device are separated by a first distance, and the three-dimensional representation corresponding to the user of the second electronic device and the first object are separated by the first distance; and in accordance with a determination that the shared content is a second type of content, different from the first type of content, displaying, via the display, a second object corresponding to the shared content in the computer-generated environment, and updating display of the three-dimensional representation corresponding to the user of the second electronic device, such that the three-dimensional representation corresponding to the user of the second electronic device and the viewpoint of the first electronic device are separated by a second distance, and the three-dimensional representation corresponding to the user of the second electronic device and the second object are separated by a third distance, different from the second distance.
Additionally or alternatively, in some examples, the determination that the shared content is the second type of content is in accordance with a determination that the first object corresponding to the shared content is configured to have a size that is greater than a threshold size when the first object is displayed in the computer-generated environment. Additionally or alternatively, in some examples, the determination that the shared content is the first type of content is in accordance with a determination that the second object corresponding to the shared content is configured to have a size that is within a threshold size when the second object is displayed in the computer-generated environment. Additionally or alternatively, in some examples, the determination that the shared content is the first type of content is in accordance with a determination that the second object corresponding to the shared content corresponds to a two-dimensional representation of the user of the second electronic device or a two-dimensional representation of the user of the third electronic device. Additionally or alternatively, in some examples, the first object is a shared application window associated with an application operating on the first electronic device. Additionally or alternatively, in some examples, the second distance is smaller than the third distance. Additionally or alternatively, in some examples, the method further comprises: while displaying the second object corresponding to the shared content in the computer-generated environment in accordance with a determination that the shared content is the second type of content in response to receiving the first input, receiving, via the one or more input devices, a second input corresponding to a request to scale the second object in the computer-generated environment; and in response to receiving the second input, in accordance with a determination that the request is to increase a size of the second object relative to the viewpoint of the first electronic device, increasing the size of the second object in the computer-generated environment relative to the viewpoint of the first electronic device in accordance with the second input, and updating display of the three-dimensional representation corresponding to the user of the second electronic device, such that the three-dimensional representation corresponding to the user of the second electronic device and the viewpoint of the first electronic device are separated by a fourth distance, smaller than the second distance.
Additionally or alternatively, in some examples, prior to receiving the first input: a user of the first electronic device and the user of the second electronic device have a spatial group within the communication session, such that the three-dimensional representation corresponding to the user of the second electronic device is positioned the first distance from the viewpoint of the first electronic device; and the three-dimensional representation corresponding to the user of the second electronic device has a first orientation that is facing toward a center of the spatial group. Additionally or alternatively, in some examples, in response to receiving the first input, in accordance with the determination that the shared content is the second type of content: the user of the first electronic device, the user of the second electronic device, and the second object have a second spatial group, different from the spatial group, within the communication session; and the three-dimensional representation corresponding to the user of the second electronic device has a first updated orientation that is facing toward the second object in the computer-generated environment. Additionally or alternatively, in some examples, the method further comprises: while displaying the second object corresponding to the shared content in the computer-generated environment in accordance with a determination that the shared content is the second type of content in response to receiving the first input, receiving, via the one or more input devices, a second input corresponding to a request to increase a size of the second object in the computer-generated environment; and in response to receiving the second input: increasing the size of the second object in the computer-generated environment relative to the viewpoint of the first electronic device in accordance with the second input; and in accordance with a determination that the second input causes the size of the second object to be increased above a threshold size in the computer-generated environment, updating display of the three-dimensional representation corresponding to the user of the second electronic device, such that the three-dimensional representation corresponding to the user of the second electronic device and the viewpoint of the first electronic device are separated by a minimum distance.
Additionally or alternatively, in some examples, the method further comprises: receiving, via the one or more input devices, a third input corresponding to a request to increase the size of the second object further above the threshold size in the computer-generated environment; and in response to receiving the third input, increasing the size of the second object further above the threshold size in the computer-generated environment relative to the viewpoint of the first electronic device in accordance with the third input, and maintaining display of the three-dimensional representation corresponding to the user of the second electronic device to be separated from the viewpoint of the first electronic device by the minimum distance. Additionally or alternatively, in some examples, displaying the second object corresponding to the shared content in the computer-generated environment comprises displaying the second object corresponding to the shared content at a first position in the computer-generated environment relative to the viewpoint of the first electronic device. In some examples, the method further comprises: while displaying the second object corresponding to the shared content at the first position in the computer-generated environment, receiving, via the one or more input devices, a second input corresponding to a request to scale the second object in the computer-generated environment; and in response to receiving the second input, in accordance with a determination that the request is to increase a size of the second object relative to the viewpoint of the first electronic device, increasing the size of the second object in the computer-generated environment relative to the viewpoint of the first electronic device in accordance with the second input, and updating a position of the second object in the computer-generated environment to be a second position, farther from the first position, in the computer-generated environment relative to the viewpoint.
Additionally or alternatively, in some examples, the method further comprises: while displaying the first object corresponding to the shared content in the computer-generated environment in accordance with the determination that the shared content is the first type of content in response to receiving the first input, detecting an indication that a user of a third electronic device has joined the communication session; and in response to detecting the indication, displaying, via the display, a three-dimensional representation corresponding to the user of the third electronic device in the computer-generated environment; wherein the three-dimensional representation corresponding to the user of the second electronic device and the three-dimensional representation corresponding to the user of the third electronic device remain separated by the first distance. Additionally or alternatively, in some examples, the method further comprises: while displaying the second object corresponding to the shared content in the computer-generated environment in accordance with the determination that the shared content is the second type of content in response to receiving the first input, detecting an indication of a change in state of the second electronic device; and in response to detecting the indication, replacing display of the three-dimensional representation corresponding to the user of the second electronic device with a two-dimensional representation of the user of the second electronic device, wherein the two-dimensional representation of the user of the second electronic device is displayed adjacent to the second object in the computer-generated environment, and updating display of the three-dimensional representation of the user of the third electronic device to be positioned at a location in the computer-generated environment that is based on a total of a size of the second object and a size of the two-dimensional representation of the user of the second electronic device. Additionally or alternatively, in some examples, the first electronic device and the second electronic device include a head-mounted display, respectively.
Some examples of the disclosure are directed to a method comprising at a first electronic device in communication with a display, one or more input devices, a second electronic device and a third electronic device: while in a communication session with the second electronic device and the third electronic device, displaying, via the display, a computer-generated environment including a three-dimensional representation corresponding to a user of the second electronic device at a first location and a three-dimensional representation corresponding to a user of the third electronic device at a second location, different from the first location, in the computer-generated environment relative to a viewpoint of the first electronic device; while displaying the computer-generated environment including the three-dimensional representation corresponding to the user of the second electronic device and the three-dimensional representation corresponding to the user of the third electronic device, receiving, via the one or more input devices, a first input corresponding to a request to display content in the computer-generated environment; and in response to receiving the first input, in accordance with a determination that the content corresponds to shared content, displaying, via the display, a first object corresponding to the shared content in the computer-generated environment, and displaying the three-dimensional representation corresponding to the user of the second electronic device at a first updated location and the three-dimensional representation corresponding to the user of the third electronic device at a second updated location, different from the first updated location, in the computer-generated environment relative to the viewpoint, including moving the three-dimensional representation of the user of the second electronic device to the first updated location and moving the three-dimensional representation of the user of the third electronic device to the second updated location in a respective direction that is selected based on a location of the first object.
Additionally or alternatively, in some examples, the first object is a shared application window associated with an application operating on the first electronic device. Additionally or alternatively, in some examples, the first updated location and the second updated location are determined relative to a reference line in the computer-generated environment. Additionally or alternatively, in some examples, before receiving the first input, the user of the first electronic device, the user of the second electronic device, and the user of the third electronic device are arranged within a spatial group having a center point, and the reference line extends between the location of the first object in the computer-generated environment and the center point of the spatial group. Additionally or alternatively, in some examples, the center point is determined based on a calculated average of the viewpoint of the first electronic device, the first location, and the second location. Additionally or alternatively, in some examples, the respective direction of movement of the three-dimensional representation corresponding to the user of the second electronic device is clockwise relative to the reference line in the computer-generated environment, and the respective direction of movement of the three-dimensional representation corresponding to the user of the third electronic device is counterclockwise relative to the reference line in the computer-generated environment. Additionally or alternatively, in some examples, the method further comprises: before receiving the first input, detecting an indication of a change in state of the second electronic device; and in response to detecting the indication, replacing display of the three-dimensional representation corresponding to the user of the second electronic device with a two-dimensional representation of the user of the second electronic device, and displaying the three-dimensional representation corresponding to the user of the third electronic device at a third updated location relative to the viewpoint, including moving the three-dimensional representation of the user of the third electronic device to the third updated location in the respective direction that is selected based on a location of the two-dimensional representation of the user of the second electronic device.
Additionally or alternatively, in some examples, the first electronic device, the second electronic device, and the third electronic device include a head-mounted display, respectively. Additionally or alternatively, in some examples, the three-dimensional representation of the user of the second electronic device and the three-dimensional representation of the user of the third electronic device are moved to the first updated location and the second updated location, respectively, in the respective direction with an animation of the movement. Additionally or alternatively, in some examples, the method further comprises: in response to receiving the first input, in accordance with a determination that the content corresponds to private content, displaying, via the display, a second object corresponding to the private content in the computer-generated environment, and maintaining display of the three-dimensional representation corresponding to the user of the second electronic device at the first location and maintaining display of the three-dimensional representation corresponding to the user of the third electronic device at the second location in the computer-generated environment. Additionally or alternatively, in some examples, the method further comprises: while displaying the computer-generated environment including the three-dimensional representation corresponding to the user of the second electronic device, the three-dimensional representation corresponding to the user of the third electronic device, and the second object, receiving, via the one or more input devices, a second input corresponding to a request to share the private content with the user of the second electronic device and the user of the third electronic device; and in response to receiving the second input, redisplaying the second object as a shared object in the computer-generated environment, and displaying the three-dimensional representation corresponding to the user of the second electronic device at a third updated location and the three-dimensional representation corresponding to the user of the third electronic device at a fourth updated location, different from the third updated location, in the computer-generated environment relative to the viewpoint, including moving the three-dimensional representation of the user of the second electronic device to the third updated location and moving the three-dimensional representation of the user of the third electronic device to the fourth updated location in the respective direction that is selected based on a location of the second object.
Additionally or alternatively, in some examples, the method further comprises: while displaying the computer-generated environment including the three-dimensional representation corresponding to the user of the second electronic device and the three-dimensional representation corresponding to the user of the third electronic device, detecting an indication of a request to display shared content in the computer-generated environment; and in response to detecting the indication, displaying, via the display, a second object corresponding to the shared content in the computer-generated environment, and updating the viewpoint of the first electronic device in the computer-generated environment relative to a location of the second object. Additionally or alternatively, in some examples, the viewpoint of the first electronic device, the first location, and the second location are arranged according to a spatial group in the computer-generated environment. In some examples, the method further comprises: while displaying the computer-generated environment including the three-dimensional representation corresponding to the user of the second electronic device and the three-dimensional representation corresponding to the user of the third electronic device, detecting an indication that the user of the second electronic device is no longer in the communication session; and in response to detecting the indication, ceasing display of the three-dimensional representation corresponding to the user of the second electronic device in the computer-generated environment, and maintaining display of the three-dimensional representation corresponding to the user of the third electronic device at the second location in the computer-generated environment. Additionally or alternatively, in some examples, the method further comprises: while displaying the computer-generated environment including the three-dimensional representation corresponding to the user of the third electronic device, detecting an indication that a user of a fourth electronic device has joined the communication session; and in response to detecting the indication displaying, via the display, a three-dimensional representation corresponding to the user of the fourth electronic device at the first location in the computer-generated environment.
Additionally or alternatively, in some examples, the method further comprises: while displaying the computer-generated environment including the three-dimensional representation corresponding to the user of the third electronic device, receiving, via the one or more input devices, a second input corresponding to a request to display shared content in the computer-generated environment; and in response to receiving the second input, displaying, via the display, a respective object corresponding to the shared content in the computer-generated environment, and displaying the three-dimensional representation corresponding to the user of the third electronic device at a third location, different from the first location and the second location, in the computer-generated environment. Additionally or alternatively, in some examples, prior to receiving the second input, the second location at which the three-dimensional representation corresponding to the user of the third electronic device is displayed is a first distance from the viewpoint of the first electronic device, and in response to receiving the second input, the third location at which the three-dimensional representation corresponding to the user of the third electronic device is displayed is a second distance, smaller than the first distance, from the viewpoint. Additionally or alternatively, in some examples, the computer-generated environment further includes a respective object corresponding to shared content. In some examples, the method further comprises: while displaying the computer-generated environment including the three-dimensional representation corresponding to the user of the third electronic device and the respective object, receiving, via the one or more input devices, a second input corresponding to a request to cease display of the shared content in the computer-generated environment; and in response to receiving the second input, ceasing display of the respective object in the computer-generated environment, and displaying the three-dimensional representation corresponding to the user of the third electronic device at a third location, different from the first location and the second location, in the computer-generated environment.
Additionally or alternatively, in some examples, the viewpoint of the first electronic device, the first location, and the second location are arranged according to a spatial group in the computer-generated environment. In some examples, the method further comprises: while displaying the computer-generated environment including the three-dimensional representation corresponding to the user of the second electronic device and the three-dimensional representation corresponding to the user of the third electronic device, detecting an indication that a user of a fourth electronic device has joined the communication session; and in response to detecting the indication, displaying, via the display, a three-dimensional representation corresponding to the user of the fourth electronic device at a third location in the computer-generated environment, moving the three-dimensional representation corresponding to the user of the second electronic device to a fourth location, different from the first location, in the computer-generated environment, and moving the three-dimensional representation corresponding to the user of the third electronic device to a fifth location, different from the second location, in the computer-generated environment.
Some examples of the disclosure are directed to a method comprising at a first electronic device in communication with a display, one or more input devices, a second electronic device and a third electronic device: while in a communication session with the second electronic device and the third electronic device, displaying, via the display, a computer-generated environment including a three-dimensional representation corresponding to a user of the second electronic device and a three-dimensional representation corresponding to a user of the third electronic device, wherein the three-dimensional representation corresponding to the user of the second electronic device and the three-dimensional representation corresponding to the user of the third electronic device are separated by a first distance; while displaying the computer-generated environment including the three-dimensional representation corresponding to the user of the second electronic device and the three-dimensional representation corresponding to the user of the third electronic device, receiving, via the one or more input devices, a first input corresponding to a request to display shared content in the computer-generated environment; and in response to receiving the first input: in accordance with a determination that the shared content is a first type of content, displaying, via the display, a first object corresponding to the shared content in the computer-generated environment, and updating display of the three-dimensional representation corresponding to the user of the second electronic device and the three-dimensional representation corresponding to the user of the third electronic device, such that the three-dimensional representation corresponding to the user of the second electronic device and the three-dimensional representation corresponding to the user of the third electronic device are separated by a second distance, different from the first distance; and in accordance with a determination that the shared content is a second type of content, different from the first type of content, displaying, via the display, a second object corresponding to the shared content in the computer-generated environment, and maintaining display of the three-dimensional representation corresponding to the user of the second electronic device and the three-dimensional representation corresponding to the user of the third electronic device to be separated by the first distance.
Additionally or alternatively, in some examples, the determination that the shared content is the first type of content is in accordance with a determination that the first object corresponding to the shared content is configured to have a size that is greater than a threshold size when the first object is displayed in the computer-generated environment. Additionally or alternatively, in some examples, the determination that the shared content is the second type of content is in accordance with a determination that the second object corresponding to the shared content is configured to have a size that is within a threshold size when the second object is displayed in the computer-generated environment. Additionally or alternatively, in some examples, the determination that the shared content is the second type of content is in accordance with a determination that the second object corresponding to the shared content corresponds to a two-dimensional representation of the user of the second electronic device or a two-dimensional representation of the user of the third electronic device. Additionally or alternatively, in some examples, the first object is a shared application window associated with an application operating on the first electronic device. Additionally or alternatively, in some examples, the second distance is smaller than the first distance. Additionally or alternatively, in some examples, the method further comprises: while displaying the first object corresponding to the shared content in the computer-generated environment in accordance with a determination that the shared content is the first type of content in response to receiving the first input, receiving, via the one or more input devices, a second input corresponding to a request to scale the first object in the computer-generated environment; and in response to receiving the second input, in accordance with a determination that the request is to increase a size of the first object relative to a viewpoint of the first electronic device, increasing the size of the first object in the computer-generated environment relative to the viewpoint of the first electronic device in accordance with the second input, and updating display of the three-dimensional representation corresponding to the user of the second electronic device and the three-dimensional representation corresponding to the user of the third electronic device, such that the three-dimensional representation corresponding to the user of the second electronic device and the three-dimensional representation corresponding to the user of the third electronic device are separated by a third distance, smaller than the second distance.
Additionally or alternatively, in some examples, prior to receiving the first input: a user of the first electronic device, the user of the second electronic device, and the user of the third electronic device have a spatial group within the communication session, such that the three-dimensional representation corresponding to the user of the second electronic device and the three-dimensional representation corresponding to the user of the third electronic device are positioned the first distance from a viewpoint of the first electronic device; and the three-dimensional representation corresponding to the user of the second electronic device has a first orientation and the three-dimensional representation corresponding to the user of the third electronic device has a second orientation that are facing toward a center of the spatial group. Additionally or alternatively, in some examples, in response to receiving the first input, in accordance with the determination that the shared content is the first type of content: the user of the first electronic device, the user of the second electronic device, the user of the third electronic device, and the first object have a second spatial group, different from the spatial group, within the communication session; and the three-dimensional representation corresponding to the user of the second electronic device has a first updated orientation and the three-dimensional representation corresponding to the user of the third electronic device has a second updated orientation that are facing toward the first object in the computer-generated environment.
Additionally or alternatively, in some examples, the method further comprises: while displaying the first object corresponding to the shared content in the computer-generated environment in accordance with a determination that the shared content is the first type of content in response to receiving the first input, receiving, via the one or more input devices, a second input corresponding to a request to increase a size of the first object in the computer-generated environment; and in response to receiving the second input, increasing the size of the first object in the computer-generated environment relative to a viewpoint of the first electronic device in accordance with the second input, and in accordance with a determination that the second input causes the size of the first object to be increased above a threshold size in the computer-generated environment, updating display of the three-dimensional representation corresponding to the user of the second electronic device and the three-dimensional representation corresponding to the user of the third electronic device, such that the three-dimensional representation corresponding to the user of the second electronic device and the three-dimensional representation corresponding to the user of the third electronic device are separated by a minimum distance. Additionally or alternatively, in some examples, the method further comprises: receiving, via the one or more input devices, a third input corresponding to a request to increase the size of the first object further above the threshold size in the computer-generated environment, and in response to receiving the third input, increasing the size of the first object further above the threshold size in the computer-generated environment relative to the viewpoint of the first electronic device in accordance with the third input, and maintaining display of the three-dimensional representation corresponding to the user of the second electronic device and the three-dimensional representation corresponding to the user of the third electronic device to be separated by the minimum distance.
Additionally or alternatively, in some examples, the method further comprises: while displaying the first object corresponding to the shared content at a first position in the computer-generated environment relative to a viewpoint of the first electronic device in accordance with a determination that the shared content is the first type of content in response to receiving the first input, receiving, via the one or more input devices, a second input corresponding to a request to scale the first object in the computer-generated environment; and in response to receiving the second input, in accordance with a determination that the request is to increase a size of the first object relative to the viewpoint of the first electronic device, increasing the size of the first object in the computer-generated environment relative to the viewpoint of the first electronic device in accordance with the second input, and updating a position of the first object in the computer-generated environment to be a second position, farther from the first position, in the computer-generated environment relative to the viewpoint.
Additionally or alternatively, in some examples, the method further comprises: while displaying the second object corresponding to the shared content in the computer-generated environment in accordance with the determination that the shared content is the second type of content in response to receiving the first input, detecting an indication that a user of a fourth electronic device has joined the communication session; and in response to detecting the indication, displaying, via the display, a three-dimensional representation corresponding to the user of the fourth electronic device in the computer-generated environment; wherein the three-dimensional representation corresponding to the user of the second electronic device and the three-dimensional representation corresponding to the user of the third electronic device remain separated by the first distance. Additionally or alternatively, in some examples, the method further comprises: while displaying the first object corresponding to the shared content in the computer-generated environment in accordance with the determination that the shared content is the first type of content in response to receiving the first input, detecting an indication of a change in state of the second electronic device; and in response to detecting the indication, replacing display of the three-dimensional representation corresponding to the user of the second electronic device with a two-dimensional representation of the user of the second electronic device, wherein the two-dimensional representation of the user of the second electronic device is displayed adjacent to the first object in the computer-generated environment, and updating display of the three-dimensional representation of the user of the third electronic device to be positioned at a location in the computer-generated environment that is based on a total of a size of the first object and a size of the two-dimensional representation of the user of the second electronic device. Additionally or alternatively, in some examples, the first electronic device, the second electronic device, and the third electronic device include a head-mounted display, respectively.
Some examples of the disclosure are directed to an electronic device comprising: one or more processors; memory; and one or more programs stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for performing any of the above methods.
Some examples of the disclosure are directed to a non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by one or more processors of an electronic device, cause the electronic device to perform any of the above methods.
Some examples of the disclosure are directed to an electronic device, comprising: one or more processors; memory; and means for performing any of the above methods.
Some examples of the disclosure are directed to an information processing apparatus for use in an electronic device, the information processing apparatus comprising means for performing any of the above methods.
The foregoing description, for purpose of explanation, has been described with reference to specific examples. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The examples were chosen and described in order to best explain the principles of the invention and its practical applications, to thereby enable others skilled in the art to best use the invention and various described examples with various modifications as are suited to the particular use contemplated.
This application claims the benefit of U.S. Provisional Application No. 63/506,116, filed Jun. 4, 2023, U.S. Provisional Application No. 63/514,327, filed Jul. 18, 2023, U.S. Provisional Application No. 63/578,616, filed Aug. 24, 2023, and U.S. Provisional Application No. 63/587,595, filed Oct. 3, 2023, the contents of which are herein incorporated by reference in their entireties for all purposes.
20190310757 | Lee et al. | Oct 2019 | A1 |
20190324529 | Stellmach et al. | Oct 2019 | A1 |
20190339770 | Kurlethimar et al. | Nov 2019 | A1 |
20190362557 | Lacey et al. | Nov 2019 | A1 |
20190371072 | Lindberg et al. | Dec 2019 | A1 |
20190377487 | Bailey et al. | Dec 2019 | A1 |
20190379765 | Fajt et al. | Dec 2019 | A1 |
20190384406 | Smith et al. | Dec 2019 | A1 |
20200004401 | Hwang et al. | Jan 2020 | A1 |
20200043243 | Bhushan et al. | Feb 2020 | A1 |
20200082602 | Jones | Mar 2020 | A1 |
20200089314 | Poupyrev et al. | Mar 2020 | A1 |
20200098140 | Jagnow et al. | Mar 2020 | A1 |
20200098173 | McCall | Mar 2020 | A1 |
20200117213 | Tian et al. | Apr 2020 | A1 |
20200159017 | Lin et al. | May 2020 | A1 |
20200225747 | Bar-Zeev et al. | Jul 2020 | A1 |
20200225830 | Tang et al. | Jul 2020 | A1 |
20200226814 | Tang et al. | Jul 2020 | A1 |
20200356221 | Behzadi et al. | Nov 2020 | A1 |
20200357374 | Verweij et al. | Nov 2020 | A1 |
20200387228 | Ravasz et al. | Dec 2020 | A1 |
20210074062 | Madonna et al. | Mar 2021 | A1 |
20210090337 | Ravasz et al. | Mar 2021 | A1 |
20210096726 | Faulkner et al. | Apr 2021 | A1 |
20210191600 | Lemay et al. | Jun 2021 | A1 |
20210295602 | Scapel et al. | Sep 2021 | A1 |
20210303074 | Vanblon et al. | Sep 2021 | A1 |
20210319617 | Ahn et al. | Oct 2021 | A1 |
20210327140 | Rothkopf et al. | Oct 2021 | A1 |
20210339134 | Knoppert | Nov 2021 | A1 |
20210350564 | Peuhkurinen et al. | Nov 2021 | A1 |
20210375022 | Lee et al. | Dec 2021 | A1 |
20220011855 | Hazra et al. | Jan 2022 | A1 |
20220030197 | Ishimoto | Jan 2022 | A1 |
20220083197 | Rockel et al. | Mar 2022 | A1 |
20220092862 | Faulkner et al. | Mar 2022 | A1 |
20220101593 | Rockel et al. | Mar 2022 | A1 |
20220101612 | Palangie et al. | Mar 2022 | A1 |
20220104910 | Shelton et al. | Apr 2022 | A1 |
20220121344 | Pastrana Vicente et al. | Apr 2022 | A1 |
20220137705 | Hashimoto et al. | May 2022 | A1 |
20220155909 | Kawashima et al. | May 2022 | A1 |
20220157083 | Jandhyala et al. | May 2022 | A1 |
20220187907 | Lee et al. | Jun 2022 | A1 |
20220191570 | Reid et al. | Jun 2022 | A1 |
20220229524 | Mckenzie et al. | Jul 2022 | A1 |
20220229534 | Terre et al. | Jul 2022 | A1 |
20220245888 | Singh et al. | Aug 2022 | A1 |
20220253149 | Berliner et al. | Aug 2022 | A1 |
20220276720 | Yasui | Sep 2022 | A1 |
20220326837 | Dessero et al. | Oct 2022 | A1 |
20220413691 | Becker et al. | Dec 2022 | A1 |
20230004216 | Rodgers et al. | Jan 2023 | A1 |
20230008537 | Henderson et al. | Jan 2023 | A1 |
20230068660 | Brent et al. | Mar 2023 | A1 |
20230069764 | Jonker et al. | Mar 2023 | A1 |
20230074080 | Miller et al. | Mar 2023 | A1 |
20230093979 | Stauber et al. | Mar 2023 | A1 |
20230133579 | Chang et al. | May 2023 | A1 |
20230152935 | Mckenzie et al. | May 2023 | A1 |
20230154122 | Dascola et al. | May 2023 | A1 |
20230163987 | Young et al. | May 2023 | A1 |
20230168788 | Faulkner et al. | Jun 2023 | A1 |
20230185426 | Rockel et al. | Jun 2023 | A1 |
20230186577 | Rockel et al. | Jun 2023 | A1 |
20230244857 | Weiss et al. | Aug 2023 | A1 |
20230273706 | Smith et al. | Aug 2023 | A1 |
20230274504 | Ren et al. | Aug 2023 | A1 |
20230315385 | Akmal et al. | Oct 2023 | A1 |
20230316634 | Chiu et al. | Oct 2023 | A1 |
20230325004 | Burns et al. | Oct 2023 | A1 |
20230350539 | Owen et al. | Nov 2023 | A1 |
20230384907 | Boesel et al. | Nov 2023 | A1 |
20240086031 | Palangie et al. | Mar 2024 | A1 |
20240086032 | Palangie et al. | Mar 2024 | A1 |
20240087256 | Hylak et al. | Mar 2024 | A1 |
20240094863 | Smith et al. | Mar 2024 | A1 |
20240095984 | Ren et al. | Mar 2024 | A1 |
20240103613 | Chawda et al. | Mar 2024 | A1 |
20240103684 | Yu et al. | Mar 2024 | A1 |
20240103707 | Henderson et al. | Mar 2024 | A1 |
20240104836 | Dessero et al. | Mar 2024 | A1 |
20240104877 | Henderson et al. | Mar 2024 | A1 |
20240111479 | Paul | Apr 2024 | A1 |
Number | Date | Country |
---|---|---
3033344 | Feb 2018 | CA |
104714771 | Jun 2015 | CN |
105264461 | Jan 2016 | CN |
105264478 | Jan 2016 | CN |
108633307 | Oct 2018 | CN |
110476142 | Nov 2019 | CN |
110673718 | Jan 2020 | CN |
2741175 | Jun 2014 | EP |
2947545 | Nov 2015 | EP |
3503101 | Jun 2019 | EP |
3588255 | Jan 2020 | EP |
3654147 | May 2020 | EP |
H10-51711 | Feb 1998 | JP |
2005-215144 | Aug 2005 | JP |
2012-234550 | Nov 2012 | JP |
2013-196158 | Sep 2013 | JP |
2013-257716 | Dec 2013 | JP |
2014-514652 | Jun 2014 | JP |
2015-515040 | May 2015 | JP |
2015-118332 | Jun 2015 | JP |
2016-194744 | Nov 2016 | JP |
2017-27206 | Feb 2017 | JP |
2018-5516 | Jan 2018 | JP |
2019-169154 | Oct 2019 | JP |
2022-53334 | Apr 2022 | JP |
10-2016-0012139 | Feb 2016 | KR |
10-2019-0100957 | Aug 2019 | KR |
2019142560 | Jul 2019 | WO |
2020066682 | Apr 2020 | WO |
2021202783 | Oct 2021 | WO |
2022046340 | Mar 2022 | WO |
2022055822 | Mar 2022 | WO |
2022066399 | Mar 2022 | WO |
2022164881 | Aug 2022 | WO |
2023141535 | Jul 2023 | WO |
Entry |
---|
AquaSnap Window Manager: dock, snap, tile, organize [online], Nurgo Software, Available online at: <https://www.nurgo-software.com/products/aquasnap>, [retrieved on Jun. 27, 2023], 5 pages. |
Corrected Notice of Allowability received for U.S. Appl. No. 17/659,147, mailed on Feb. 14, 2024, 6 pages. |
Corrected Notice of Allowability received for U.S. Appl. No. 17/932,655, mailed on Oct. 12, 2023, 2 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 17/478,593, mailed on Dec. 21, 2022, 2 pages. |
Extended European Search Report received for European Patent Application No. 23158818.7, mailed on Jul. 3, 2023, 12 pages. |
Extended European Search Report received for European Patent Application No. 23158929.2, mailed on Jun. 27, 2023, 12 pages. |
Final Office Action received for U.S. Appl. No. 17/448,875, mailed on Mar. 16, 2023, 24 pages. |
Final Office Action received for U.S. Appl. No. 17/659,147, mailed on Oct. 4, 2023, 17 pages. |
Home | Virtual Desktop [online], Virtual Desktop, Available online at: <https://www.vrdesktop.net>, [retrieved on Jun. 29, 2023], 4 pages. |
International Search Report received for PCT Application No. PCT/US2022/076603, mailed on Jan. 9, 2023, 4 pages. |
International Search Report received for PCT Application No. PCT/US2023/017335, mailed on Aug. 22, 2023, 6 pages. |
International Search Report received for PCT Application No. PCT/US2023/018213, mailed on Jul. 26, 2023, 6 pages. |
International Search Report received for PCT Application No. PCT/US2023/019458, mailed on Aug. 8, 2023, 7 pages. |
International Search Report received for PCT Application No. PCT/US2023/060943, mailed on Jun. 6, 2023, 7 pages. |
International Search Report received for PCT Patent Application No. PCT/US2021/050948, mailed on Mar. 4, 2022, 6 pages. |
International Search Report received for PCT Patent Application No. PCT/US2021/071595, mailed on Mar. 17, 2022, 7 pages. |
International Search Report received for PCT Patent Application No. PCT/US2021/071596, mailed on Apr. 8, 2022, 7 pages. |
International Search Report received for PCT Patent Application No. PCT/US2022/013208, mailed on Apr. 26, 2022, 7 pages. |
International Search Report received for PCT Patent Application No. PCT/US2022/071704, mailed on Aug. 26, 2022, 6 pages. |
Non-Final Office Action received for U.S. Appl. No. 17/448,875, mailed on Oct. 6, 2022, 25 pages. |
Non-Final Office Action received for U.S. Appl. No. 17/448,875, mailed on Sep. 29, 2023, 30 pages. |
Non-Final Office Action received for U.S. Appl. No. 17/580,495, mailed on Dec. 11, 2023, 27 pages. |
Non-Final Office Action received for U.S. Appl. No. 18/182,300, mailed on Oct. 26, 2023, 29 pages. |
Non-Final Office Action received for U.S. Appl. No. 17/659,147, mailed on Mar. 16, 2023, 19 pages. |
Non-Final Office Action received for U.S. Appl. No. 17/932,655, mailed on Apr. 20, 2023, 10 pages. |
Notice of Allowance received for U.S. Appl. No. 17/448,876, mailed on Apr. 7, 2022, 9 pages. |
Notice of Allowance received for U.S. Appl. No. 17/448,876, mailed on Jul. 20, 2022, 8 pages. |
Notice of Allowance received for U.S. Appl. No. 17/478,593, mailed on Aug. 31, 2022, 10 pages. |
Notice of Allowance received for U.S. Appl. No. 17/580,495, mailed on Jun. 6, 2023, 6 pages. |
Notice of Allowance received for U.S. Appl. No. 17/580,495, mailed on Nov. 30, 2022, 12 pages. |
Notice of Allowance received for U.S. Appl. No. 17/659,147, mailed on Jan. 26, 2024, 13 pages. |
Notice of Allowance received for U.S. Appl. No. 17/932,655, mailed on Jan. 24, 2024, 7 pages. |
Notice of Allowance received for U.S. Appl. No. 17/932,655, mailed on Sep. 29, 2023, 7 pages. |
Notice of Allowance received for U.S. Appl. No. 18/154,757, mailed on Jan. 23, 2024, 10 pages. |
Restriction Requirement received for U.S. Appl. No. 17/932,999, mailed on Oct. 3, 2023, 6 pages. |
Bhowmick, Shimmila, “Explorations on Body-Gesture Based Object Selection on HMD Based VR Interfaces for Dense and Occluded Dense Virtual Environments”, Report: State of the Art Seminar, Department of Design Indian Institute of Technology, Guwahati, Nov. 2018, 25 pages. |
Bolt et al., "Two-Handed Gesture in Multi-Modal Natural Dialog", UIST '92, 5th Annual Symposium on User Interface Software and Technology, Proceedings of the ACM Symposium on User Interface Software and Technology, Monterey, Nov. 15-18, 1992, pp. 7-14. |
Brennan, Dominic, "4 Virtual Reality Desktops for Vive, Rift, and Windows VR Compared", [online]. Road to VR, Available online at: <https://www.roadtovr.com/virtual-reality-desktop-compared-oculus-rift-htc-vive/>, [retrieved on Jun. 29, 2023], Jan. 3, 2018, 4 pages. |
Camalich, Sergio, “CSS Buttons with Pseudo-elements”, Available online at: <https://tympanus.net/codrops/2012/01/11/css-buttons-with-pseudo-elements/>, [retrieved on Jul. 12, 2017], Jan. 11, 2012, 8 pages. |
Chatterjee et al., “Gaze+Gesture: Expressive, Precise and Targeted Free-Space Interactions”, ICMI '15, Nov. 9-13, 2015, 8 pages. |
Lin et al., “Towards Naturally Grabbing and Moving Objects in VR”, IS&T International Symposium on Electronic Imaging and The Engineering Reality of Virtual Reality, 2016, 6 pages. |
McGill et al., “Expanding The Bounds of Seated Virtual Workspaces”, University of Glasgow, Available online at: <https://core.ac.uk/download/pdf/323988271.pdf>, [retrieved on Jun. 27, 2023], Jun. 5, 2020, 44 pages. |
Pfeuffer et al., “Gaze + Pinch Interaction in Virtual Reality”, In Proceedings of SUI '17, Brighton, United Kingdom, Oct. 16-17, 2017, pp. 99-108. |
Corrected Notice of Allowability received for U.S. Appl. No. 17/448,875, mailed on Apr. 24, 2024, 4 pages. |
Corrected Notice of Allowability received for U.S. Appl. No. 18/465,098, mailed on Mar. 13, 2024, 3 pages. |
European Search Report received for European Patent Application No. 21791153.6, mailed on Mar. 22, 2024, 5 pages. |
Extended European Search Report received for European Patent Application No. 23197572.3, mailed on Feb. 19, 2024, 7 pages. |
Final Office Action received for U.S. Appl. No. 17/580,495, mailed on May 13, 2024, 29 pages. |
Final Office Action received for U.S. Appl. No. 18/182,300, mailed on Feb. 16, 2024, 32 pages. |
Search Report received for Chinese Patent Application No. 202310873465.7, mailed on Feb. 1, 2024, 5 pages (2 pages of English Translation and 3 pages of Official Copy). |
Notice of Allowance received for U.S. Appl. No. 18/465,098, mailed on Nov. 17, 2023, 8 pages. |
International Search Report received for PCT Patent Application No. PCT/US2023/074257, mailed on Nov. 21, 2023, 5 pages. |
International Search Report received for PCT Patent Application No. PCT/US2023/074950, mailed on Jan. 3, 2024, 9 pages. |
International Search Report received for PCT Patent Application No. PCT/US2023/074979, mailed on Feb. 26, 2024, 6 pages. |
Notice of Allowance received for U.S. Appl. No. 18/465,098, mailed on Mar. 4, 2024, 6 pages. |
Non-Final Office Action received for U.S. Appl. No. 17/932,999, mailed on Feb. 23, 2024, 22 pages. |
Non-Final Office Action received for U.S. Appl. No. 18/157,040, mailed on May 2, 2024, 25 pages. |
Non-Final Office Action received for U.S. Appl. No. 18/182,300, mailed on May 29, 2024, 33 pages. |
Non-Final Office Action received for U.S. Appl. No. 18/305,201, mailed on May 23, 2024, 11 pages. |
Notice of Allowance received for U.S. Appl. No. 17/448,875, mailed on Apr. 17, 2024, 8 pages. |
Notice of Allowance received for U.S. Appl. No. 17/659,147, mailed on May 29, 2024, 13 pages. |
Notice of Allowance received for U.S. Appl. No. 18/154,757, mailed on May 10, 2024, 12 pages. |
Notice of Allowance received for U.S. Appl. No. 18/423,187, mailed on Jun. 5, 2024, 9 pages. |
Notice of Allowance received for U.S. Appl. No. 18/463,739, mailed on Feb. 1, 2024, 10 pages. |
Notice of Allowance received for U.S. Appl. No. 18/463,739, mailed on Jun. 17, 2024, 9 pages. |
Notice of Allowance received for U.S. Appl. No. 18/463,739, mailed on Oct. 30, 2023, 11 pages. |
Notice of Allowance received for U.S. Appl. No. 18/465,098, mailed on Jun. 20, 2024, 8 pages. |
Number | Date | Country
---|---|---
63587595 | Oct 2023 | US
63578616 | Aug 2023 | US
63514327 | Jul 2023 | US
63506116 | Jun 2023 | US