This relates generally to systems and methods of application-based spatial refinement of objects in three-dimensional environments within multi-user communication sessions.
Some computer graphical environments provide two-dimensional and/or three-dimensional environments where at least some objects displayed for a user's viewing are virtual and generated by a computer. In some examples, the three-dimensional environments are presented by multiple devices communicating in a multi-user communication session. In some examples, an avatar (e.g., a representation) of each user participating in the multi-user communication session (e.g., via the computing devices) is displayed in the three-dimensional environment of the multi-user communication session. In some examples, content can be shared in the three-dimensional environment for viewing and interaction by multiple users participating in the multi-user communication session. In some examples, shared content and/or avatars corresponding to the users participating in the multi-user communication session can be moved within the three-dimensional environment.
Some examples of the disclosure are directed to systems and methods for application-based spatial refinement in a multi-user communication session. In some examples, a first electronic device and a second electronic device may be communicatively linked in a multi-user communication session. In some examples, the first electronic device may present a three-dimensional environment including a first shared object and an avatar corresponding to a user of the second electronic device. In some examples, while the first electronic device is presenting the three-dimensional environment, the first electronic device may receive a first input corresponding to a request to move the first shared object in a first manner in the three-dimensional environment. In some examples, in accordance with a determination that the first shared object is an object of a first type, the first electronic device may move the first shared object and the avatar in the three-dimensional environment in the first manner in accordance with the first input. In some examples, in accordance with a determination that the first shared object is an object of a second type, different from the first type, and the first input is a first type of input, the first electronic device may move the first shared object in the three-dimensional environment in the first manner in accordance with the first input, without moving the avatar.
In some examples, an object of the first type corresponds to an object that has a horizontal orientation in the three-dimensional environment relative to a viewpoint of a user of the first electronic device. In some examples, an object of the second type corresponds to an object that has a vertical orientation in the three-dimensional environment relative to a viewpoint of a user of the first electronic device. In some examples, the first manner of movement directed to the first shared object includes forward or backward movement of the first shared object in the three-dimensional environment relative to the viewpoint of the user of the first electronic device. In some examples, if the first shared object is an object of the second type, the first electronic device scales the first shared object in the three-dimensional environment when the first shared object is moved in the three-dimensional environment in the first manner in accordance with the first input.
The full descriptions of these examples are provided in the Drawings and the Detailed Description, and it is understood that this Summary does not limit the scope of the disclosure in any way.
For improved understanding of the various examples described herein, reference should be made to the Detailed Description below along with the following drawings. Like reference numerals often refer to corresponding parts throughout the drawings.
Some examples of the disclosure are directed to systems and methods for application-based spatial refinement in a multi-user communication session. In some examples, a first electronic device and a second electronic device may be communicatively linked in a multi-user communication session. In some examples, the first electronic device may present a three-dimensional environment including a first shared object and an avatar corresponding to a user of the second electronic device. In some examples, while the first electronic device is presenting the three-dimensional environment, the first electronic device may receive a first input corresponding to a request to move the first shared object in a first manner in the three-dimensional environment. In some examples, in accordance with a determination that the first shared object is an object of a first type, the first electronic device may move the first shared object and the avatar in the three-dimensional environment in the first manner in accordance with the first input. In some examples, in accordance with a determination that the first shared object is an object of a second type, different from the first type, and the first input is a first type of input, the first electronic device may move the first shared object in the three-dimensional environment in the first manner in accordance with the first input, without moving the avatar.
In some examples, an object of the first type corresponds to an object that has a horizontal orientation in the three-dimensional environment relative to a viewpoint of a user of the first electronic device. In some examples, an object of the second type corresponds to an object that has a vertical orientation in the three-dimensional environment relative to a viewpoint of a user of the first electronic device. In some examples, the first manner of movement directed to the first shared object includes forward or backward movement of the first shared object in the three-dimensional environment relative to the viewpoint of the user of the first electronic device. In some examples, if the first shared object is an object of the second type, the first electronic device scales the first shared object in the three-dimensional environment when the first shared object is moved in the three-dimensional environment in the first manner in accordance with the first input.
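By way of a non-limiting illustration, the following Swift sketch paraphrases the branching behavior described above. The type names, the enumeration of input kinds, and the use of simple position vectors are assumptions made for clarity rather than an implementation drawn from this disclosure.

```swift
// Illustrative, hypothetical model of shared content and avatars.
enum SharedObjectType {
    case horizontal   // "first type": content oriented horizontally relative to the viewpoint
    case vertical     // "second type": e.g., an upright application window
}

enum MovementKind {
    case towardOrAwayFromViewpoint   // forward/backward relative to the viewer ("first type of input")
    case verticallyUpOrDown          // upward/downward relative to the viewer ("first type of input")
    case radialOrLateral             // around or across the viewpoint ("second type of input")
}

struct SharedObject {
    var position: SIMD3<Double>
    var type: SharedObjectType
}

struct Avatar {
    var position: SIMD3<Double>
}

/// Paraphrase of the decision rule described above: a horizontally oriented
/// object always moves together with the avatar (spatial refinement), while a
/// vertically oriented object moves alone for forward/backward or vertical
/// drags and moves together with the avatar only for radial/lateral drags.
func handleMovement(of object: inout SharedObject,
                    avatar: inout Avatar,
                    by delta: SIMD3<Double>,
                    kind: MovementKind) {
    switch (object.type, kind) {
    case (.horizontal, _):
        object.position += delta        // move the shared object ...
        avatar.position += delta        // ... and the avatar in the same manner
    case (.vertical, .towardOrAwayFromViewpoint), (.vertical, .verticallyUpOrDown):
        object.position += delta        // move only the shared object
    case (.vertical, .radialOrLateral):
        object.position += delta        // spatial refinement: move both
        avatar.position += delta
    }
}
```

The scaling behavior described above for vertically oriented objects is omitted here; it is illustrated in a later sketch.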
In some examples, performing spatial refinement in the three-dimensional environment while in the multi-user communication session may include interaction with one or more objects in the three-dimensional environment. For example, initiation of spatial refinement in the three-dimensional environment can include interaction with one or more virtual objects displayed in the three-dimensional environment. In some examples, a user's gaze may be tracked by the electronic device as an input for identifying one or more virtual objects targeted for selection when initiating spatial refinement while in the multi-user communication session. For example, gaze can be used to identify one or more virtual objects targeted for selection using another selection input. In some examples, a virtual object may be selected using hand-tracking input detected via an input device in communication with the electronic device. In some examples, objects displayed in the three-dimensional environment may be moved and/or reoriented in the three-dimensional environment in accordance with movement input detected via the input device.
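As a further non-limiting sketch of how gaze and hand tracking might together select and move a virtual object, the example below assumes a hypothetical hit-testing helper and a simple pinch-and-drag model; none of the names correspond to an actual API.

```swift
// Hypothetical per-frame samples produced by eye and hand tracking.
struct GazeRay {
    var origin: SIMD3<Double>
    var direction: SIMD3<Double>   // assumed to be normalized
}

struct HandSample {
    var isPinching: Bool
    var position: SIMD3<Double>
}

final class SelectionController {
    private(set) var selectedIndex: Int?
    private var lastHandPosition: SIMD3<Double>?

    /// Stand-in for a real hit test: returns the nearest object whose bounding
    /// sphere (of the given radius) the gaze ray intersects.
    func hitTest(gaze: GazeRay, centers: [SIMD3<Double>], radius: Double) -> Int? {
        var best: (index: Int, along: Double)?
        for (index, center) in centers.enumerated() {
            let toCenter = center - gaze.origin
            let along = (toCenter * gaze.direction).sum()        // distance along the ray
            guard along > 0 else { continue }                    // ignore objects behind the viewer
            let closest = gaze.origin + along * gaze.direction
            let miss = closest - center
            if (miss * miss).sum().squareRoot() <= radius, along < (best?.along ?? .infinity) {
                best = (index, along)
            }
        }
        return best?.index
    }

    /// Gaze identifies the target; pinching and moving the hand drags it.
    func update(gaze: GazeRay, hand: HandSample, objectCenters: inout [SIMD3<Double>]) {
        if hand.isPinching {
            if selectedIndex == nil {
                selectedIndex = hitTest(gaze: gaze, centers: objectCenters, radius: 0.25)
            }
            if let index = selectedIndex, let last = lastHandPosition {
                objectCenters[index] += hand.position - last     // follow the hand's movement
            }
            lastHandPosition = hand.position
        } else {
            selectedIndex = nil                                  // releasing the pinch ends the drag
            lastHandPosition = nil
        }
    }
}
```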
It should be understood that virtual object 114 is a representative virtual object and one or more different virtual objects (e.g., of various dimensionality such as two-dimensional or three-dimensional virtual objects) can be included and rendered in a three-dimensional computer-generated environment. For example, the virtual object can represent an application or a user interface displayed in the computer-generated environment. In some examples, the virtual object can represent content corresponding to the application and/or displayed via the user interface in the computer-generated environment. In some examples, the virtual object 114 is optionally configured to be interactive and responsive to user input, such that a user may virtually touch, tap, move, rotate, or otherwise interact with the virtual object. In some examples, the virtual object 114 may be displayed in a three-dimensional computer-generated environment within a multi-user communication session (also referred to herein simply as a “communication session”). In some such examples, as described in more detail below, the virtual object 114 may be viewable and/or configured to be interactive and responsive to multiple users and/or user input provided by multiple users, respectively, represented by virtual representations (e.g., avatars, such as avatar 115). For example, the virtual object 114 may be shared among multiple users in the communication session such that input directed to the virtual object 114 is optionally viewable by the multiple users. Additionally, it should be understood that the three-dimensional environment (or three-dimensional virtual object) described herein may be a representation of a three-dimensional environment (or three-dimensional virtual object) projected or presented at an electronic device.
In the discussion that follows, an electronic device that is in communication with a display generation component and one or more input devices is described. It should be understood that the electronic device optionally is in communication with one or more other physical user-interface devices, such as a touch-sensitive surface, a physical keyboard, a mouse, a joystick, a hand tracking device, an eye tracking device, a stylus, etc. Further, as described above, it should be understood that the described electronic device, display, and touch-sensitive surface are optionally distributed amongst two or more devices. Therefore, as used in this disclosure, information displayed on the electronic device or by the electronic device is optionally used to describe information outputted by the electronic device for display on a separate display device (touch-sensitive or not). Similarly, as used in this disclosure, input received on the electronic device (e.g., touch input received on a touch-sensitive surface of the electronic device, or touch input received on the surface of a stylus) is optionally used to describe input received on a separate input device, from which the electronic device receives input information.
The device typically supports a variety of applications, such as one or more of the following: a drawing application, a presentation application, a word processing application, a website creation application, a disk authoring application, a spreadsheet application, a gaming application, a telephone application, a video conferencing application, an e-mail application, an instant messaging application, a workout support application, a photo management application, a digital camera application, a digital video camera application, a web browsing application, a digital music player application, a television channel browsing application, and/or a digital video player application.
As illustrated in
Communication circuitry 222A, 222B optionally includes circuitry for communicating with electronic devices, networks, such as the Internet, intranets, a wired network and/or a wireless network, cellular networks, and wireless local area networks (LANs). Communication circuitry 222A, 222B optionally includes circuitry for communicating using near-field communication (NFC) and/or short-range communication, such as Bluetooth®.
Processor(s) 218A, 218B include one or more general processors, one or more graphics processors, and/or one or more digital signal processors. In some examples, memory 220A, 220B is a non-transitory computer-readable storage medium (e.g., flash memory, random access memory, or other volatile or non-volatile memory or storage) that stores computer-readable instructions configured to be executed by processor(s) 218A, 218B to perform the techniques, processes, and/or methods described below. In some examples, memory 220A, 220B can include more than one non-transitory computer-readable storage medium. A non-transitory computer-readable storage medium can be any medium (e.g., excluding a signal) that can tangibly contain or store computer-executable instructions for use by or in connection with the instruction execution system, apparatus, or device. In some examples, the storage medium is a transitory computer-readable storage medium. In some examples, the storage medium is a non-transitory computer-readable storage medium. The non-transitory computer-readable storage medium can include, but is not limited to, magnetic, optical, and/or semiconductor storages. Examples of such storage include magnetic disks, optical discs based on CD, DVD, or Blu-ray technologies, as well as persistent solid-state memory such as flash, solid-state drives, and the like.
In some examples, display generation component(s) 214A, 214B include a single display (e.g., a liquid-crystal display (LCD), an organic light-emitting diode (OLED) display, or other types of display). In some examples, display generation component(s) 214A, 214B include multiple displays. In some examples, display generation component(s) 214A, 214B can include a display with touch capability (e.g., a touch screen), a projector, a holographic projector, a retinal projector, etc. In some examples, devices 260 and 270 include touch-sensitive surface(s) 209A and 209B, respectively, for receiving user inputs, such as tap inputs and swipe inputs or other gestures. In some examples, display generation component(s) 214A, 214B and touch-sensitive surface(s) 209A, 209B form touch-sensitive display(s) (e.g., a touch screen integrated with devices 260 and 270, respectively, or external to devices 260 and 270, respectively, that is in communication with devices 260 and 270).
Devices 260 and 270 optionally include image sensor(s) 206A and 206B, respectively. Image sensor(s) 206A/206B optionally include one or more visible light image sensors, such as charge-coupled device (CCD) sensors, and/or complementary metal-oxide-semiconductor (CMOS) sensors operable to obtain images of physical objects from the real-world environment. Image sensor(s) 206A/206B also optionally include one or more infrared (IR) sensors, such as a passive or an active IR sensor, for detecting infrared light from the real-world environment. For example, an active IR sensor includes an IR emitter for emitting infrared light into the real-world environment. Image sensor(s) 206A/206B also optionally include one or more cameras configured to capture movement of physical objects in the real-world environment. Image sensor(s) 206A/206B also optionally include one or more depth sensors configured to detect the distance of physical objects from device 260/270. In some examples, information from one or more depth sensors can allow the device to identify and differentiate objects in the real-world environment from other objects in the real-world environment. In some examples, one or more depth sensors can allow the device to determine the texture and/or topography of objects in the real-world environment.
In some examples, devices 260 and 270 use CCD sensors, event cameras, and depth sensors in combination to detect the physical environment around devices 260 and 270. In some examples, image sensor(s) 206A/206B include a first image sensor and a second image sensor. The first image sensor and the second image sensor work in tandem and are optionally configured to capture different information of physical objects in the real-world environment. In some examples, the first image sensor is a visible light image sensor and the second image sensor is a depth sensor. In some examples, device 260/270 uses image sensor(s) 206A/206B to detect the position and orientation of device 260/270 and/or display generation component(s) 214A/214B in the real-world environment. For example, device 260/270 uses image sensor(s) 206A/206B to track the position and orientation of display generation component(s) 214A/214B relative to one or more fixed objects in the real-world environment.
In some examples, device 260/270 includes microphone(s) 213A/213B or other audio sensors. Device 260/270 uses microphone(s) 213A/213B to detect sound from the user and/or the real-world environment of the user. In some examples, microphone(s) 213A/213B include an array of microphones (a plurality of microphones) that optionally operate in tandem, such as to identify ambient noise or to locate the source of a sound in the space of the real-world environment.
Device 260/270 includes location sensor(s) 204A/204B for detecting a location of device 260/270 and/or display generation component(s) 214A/214B. For example, location sensor(s) 204A/204B can include a GPS receiver that receives data from one or more satellites and allows device 260/270 to determine the device's absolute position in the physical world.
Device 260/270 includes orientation sensor(s) 210A/210B for detecting orientation and/or movement of device 260/270 and/or display generation component(s) 214A/214B. For example, device 260/270 uses orientation sensor(s) 210A/210B to track changes in the position and/or orientation of device 260/270 and/or display generation component(s) 214A/214B, such as with respect to physical objects in the real-world environment. Orientation sensor(s) 210A/210B optionally include one or more gyroscopes and/or one or more accelerometers.
Device 260/270 includes hand tracking sensor(s) 202A/202B and/or eye tracking sensor(s) 212A/212B, in some examples. Hand tracking sensor(s) 202A/202B are configured to track the position/location of one or more portions of the user's hands, and/or motions of one or more portions of the user's hands with respect to the extended reality environment, relative to the display generation component(s) 214A/214B, and/or relative to another defined coordinate system. Eye tracking sensor(s) 212A/212B are configured to track the position and movement of a user's gaze (eyes, face, or head, more generally) with respect to the real-world or extended reality environment and/or relative to the display generation component(s) 214A/214B. In some examples, hand tracking sensor(s) 202A/202B and/or eye tracking sensor(s) 212A/212B are implemented together with the display generation component(s) 214A/214B. In some examples, the hand tracking sensor(s) 202A/202B and/or eye tracking sensor(s) 212A/212B are implemented separate from the display generation component(s) 214A/214B.
In some examples, the hand tracking sensor(s) 202A/202B can use image sensor(s) 206A/206B (e.g., one or more IR cameras, 3D cameras, depth cameras, etc.) that capture three-dimensional information from the real-world environment, including one or more hands (e.g., of a human user). In some examples, the hands can be resolved with sufficient resolution to distinguish fingers and their respective positions. In some examples, one or more image sensor(s) 206A/206B are positioned relative to the user to define a field of view of the image sensor(s) 206A/206B and an interaction space in which finger/hand position, orientation, and/or movement captured by the image sensors are used as inputs (e.g., to distinguish from a user's resting hand or other hands of other persons in the real-world environment). Tracking the fingers/hands for input (e.g., gestures, touch, tap, etc.) can be advantageous in that it does not require the user to touch, hold, or wear any sort of beacon, sensor, or other marker.
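As a simple, non-limiting example of turning tracked fingertip positions into an input signal, the sketch below treats the hand as pinching when the thumb tip and index fingertip are within a small distance of each other; the joint names and threshold value are purely illustrative assumptions.

```swift
// Hypothetical hand-joint sample produced by a hand tracking pipeline.
struct TrackedHand {
    var thumbTip: SIMD3<Double>   // positions in meters, in a shared coordinate space
    var indexTip: SIMD3<Double>
}

/// Returns true when the thumb and index fingertips are close enough together
/// to be treated as a pinch gesture. The 1.5 cm threshold is an assumption.
func isPinching(_ hand: TrackedHand, threshold: Double = 0.015) -> Bool {
    let gap = hand.thumbTip - hand.indexTip
    return (gap * gap).sum().squareRoot() <= threshold
}
```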
In some examples, eye tracking sensor(s) 212A/212B include at least one eye tracking camera (e.g., an infrared (IR) camera) and/or one or more illumination sources (e.g., IR light sources, such as LEDs) that emit light towards a user's eyes. The eye tracking cameras may be pointed towards a user's eyes to receive reflected IR light from the light sources directly or indirectly from the eyes. In some examples, both eyes are tracked separately by respective eye tracking cameras and illumination sources, and a focus/gaze can be determined from tracking both eyes. In some examples, one eye (e.g., a dominant eye) is tracked by a respective eye tracking camera/illumination source(s).
Device 260/270 and system 201 are not limited to the components and configuration of
Attention is now directed towards exemplary concurrent displays of a three-dimensional environment on a first electronic device (e.g., corresponding to device 260) and a second electronic device (e.g., corresponding to device 270). As discussed below, the first electronic device may be in communication with the second electronic device in a multi-user communication session. In some examples, an avatar of (e.g., a virtual representation of) a user of the first electronic device may be displayed in the three-dimensional environment at the second electronic device, and an avatar of a user of the second electronic device may be displayed in the three-dimensional environment at the first electronic device. In some examples, content may be shared and interactive within the three-dimensional environment while the first electronic device and the second electronic device are in the multi-user communication session.
As shown in
As mentioned above, in some examples, the first electronic device 360 may enter a multi-user communication session with the second electronic device 370. For example, in the multi-user communication session, the first electronic device 360 and the second electronic device 370 (e.g., via communication circuitry 222A/222B of
In some examples, while two or more electronic devices are communicatively linked in a multi-user communication session, avatars corresponding to the users of the two or more electronic devices are optionally displayed within the shared three-dimensional environments presented at the two or more electronic devices. As shown in
In some examples, the avatars 315/317 are each a representation (e.g., a full-body rendering) of a respective user of the electronic devices 370/360. In some examples, the avatars 315/317 are each a representation of a portion (e.g., a rendering of a head, face, head and torso, etc.) of a respective user of the electronic devices 370/360. In some examples, the avatars 315/317 are each a user-personalized, user-selected, and/or user-created representation displayed in the three-dimensional environments 350A/350B that is representative of the respective user of the electronic devices 370/360. It should be understood that, while the avatars 315/317 illustrated in
In some examples, the presentation of avatars 315/317 as part of the shared three-dimensional environment is optionally accompanied by an audio effect corresponding to a voice of the users of the electronic devices 370/360. For example, the avatar 315 displayed in the three-dimensional environment 350A using the first electronic device 360 is optionally accompanied by an audio effect corresponding to the voice of the user of the second electronic device 370. In some such examples, when the user of the second electronic device 370 speaks, the voice of the user may be detected by the second electronic device 370 (e.g., via the microphone(s) 213B in
In some examples, while in the multi-user communication session, the avatars 315/317 are displayed in the three-dimensional environments 350A/350B with respective orientations that (e.g., initially, such as prior to spatial refinement, as discussed in more detail below) correspond to and/or are based on orientations of the electronic devices 360/370 in the physical environments surrounding the electronic devices 360/370. For example, as shown in
Additionally, in some examples, while in the multi-user communication session, a viewpoint of the three-dimensional environments 350A/350B and/or a location of the viewpoint of the three-dimensional environments 350A/350B optionally changes in accordance with movement of the electronic devices 360/370 (e.g., by the users of the electronic devices 360/370). For example, while in the communication session, if the first electronic device 360 is moved closer toward the representation of the table 306′ and/or the avatar 315 (e.g., because the user of the first electronic device 360 moved forward in the physical environment surrounding the first electronic device 360), the viewpoint of the user of the first electronic device 360 would change accordingly, such that the representation of the table 306′, the representation of the window 309′ and the avatar 315 appear larger in the field of view of three-dimensional environment 350A.
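One non-limiting way to picture this relationship is that the viewpoint simply follows the device's pose, so the angular size of a fixed object grows as the device moves toward it. The sketch below illustrates the geometry with assumed names and units; it is not drawn from any particular implementation.

```swift
import Foundation   // for atan

struct Viewpoint {
    var position: SIMD3<Double>
}

/// Moving the device moves the viewpoint of the three-dimensional environment accordingly.
func updatedViewpoint(_ viewpoint: Viewpoint, afterDeviceMovement movement: SIMD3<Double>) -> Viewpoint {
    Viewpoint(position: viewpoint.position + movement)
}

/// Approximate angular size (in radians) of an object of the given width as
/// seen from the viewpoint; it increases as the viewpoint approaches the
/// object, so the object occupies more of the field of view.
func angularSize(ofWidth width: Double,
                 at objectPosition: SIMD3<Double>,
                 from viewpoint: Viewpoint) -> Double {
    let offset = objectPosition - viewpoint.position
    let distance = (offset * offset).sum().squareRoot()
    return 2 * atan((width / 2) / max(distance, 1e-6))
}
```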
As mentioned above, while in the multi-user communication session, content can be shared between the first electronic device and the second electronic device, such that the content can be interacted with (e.g., viewed, moved, modified, etc.) by the users of the first electronic device and the second electronic device. In some examples, shared content can be moved within the shared three-dimensional environments presented by the first electronic device and the second electronic device by directly or indirectly interacting with the shared content. In some such examples, however, moving the shared content closer to the viewpoint of one user optionally moves the shared content farther from the viewpoint of the other user in the multi-user communication session. Accordingly, it may be advantageous to provide a method for spatial refinement (e.g., movement and/or repositioning of avatars and/or shared objects) in shared three-dimensional environments while multiple devices are in a multi-user communication session, which would allow content to be moved at one electronic device without moving the content at the other electronic device. As used herein, performing spatial refinement in the shared three-dimensional environment includes moving a shared object that is selected for movement (e.g., in response to input directed to the shared object) and moving other shared objects and/or avatars corresponding to other users in the multi-user communication session in accordance with the movement of the shared object.
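Restated as a non-limiting sketch, the definition above amounts to applying one common translation to the dragged object, to every other shared object, and to every remote participant's avatar, so that their arrangement relative to one another is preserved. The types below are assumptions made for illustration.

```swift
// Hypothetical model of the shared spatial arrangement maintained at one device.
struct SpatialGroup {
    var sharedObjectPositions: [SIMD3<Double>]
    var avatarPositions: [SIMD3<Double>]   // avatars of the other participants
}

/// Spatial refinement as described above: when one shared object is dragged by
/// `delta`, every shared object and every remote avatar moves by the same
/// `delta`, keeping their positions fixed relative to one another.
func spatiallyRefine(_ group: inout SpatialGroup, by delta: SIMD3<Double>) {
    for index in group.sharedObjectPositions.indices {
        group.sharedObjectPositions[index] += delta
    }
    for index in group.avatarPositions.indices {
        group.avatarPositions[index] += delta
    }
}
```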
In some examples, the three-dimensional environments shared between the first electronic device 360 and the second electronic device 370 may include one or more shared virtual objects. For example, as shown in
Additionally, in some examples, the position of the avatars 315 and 317 within the three-dimensional environments 350A/350B may reflect/be indicative of the relative distances between the shared virtual objects 314 and 352 and the viewpoints of the users of the electronic devices 360/370. For example, as shown in
In some examples, because the shared virtual objects 314 and 352 are positioned far from the viewpoint of the user of the first electronic device 360, the user of the first electronic device 360 may desire to move the shared virtual objects 314 and 352 closer to the viewpoint of the user of the first electronic device 360. Accordingly, in some examples, it may be advantageous to allow the users of the first electronic device and/or the second electronic device to spatially refine the virtual objects shared between the first electronic device and the second electronic device without moving the virtual objects to undesirable locations within the three-dimensional environments, as discussed above. Example interactions involving spatial refinement of the shared three-dimensional environment (including the shared virtual objects 314 and 352) in the multi-user communication session are discussed below.
As shown in
In some examples, in response to receiving the selection input 372A followed by the movement input 374A, the first electronic device 360 performs spatial refinement in the three-dimensional environment 350A. For example, as described below, the first electronic device 360 moves the shared virtual objects 314 and 352 and the avatar 315 in the three-dimensional environment 350A in accordance with the movement input 374A, rather than just moving the shared virtual objects 314 and 352 in the three-dimensional environment 350A. In some examples, performing spatial refinement enables shared content to be moved within the three-dimensional environment 350A (e.g., closer to the viewpoint of the user of the first electronic device 360), without potentially moving the shared content farther from the user of the second electronic device 370 or to an undesirable location for the user of the second electronic device 370 in the three-dimensional environment 350B.
Additionally, in some examples, in response to receiving the selection input 372A and/or the movement input 374A, the first electronic device 360 optionally displays a planar element (e.g., a disc or disc-shaped element) 337 below the shared objects in the three-dimensional environment 350A (and optionally representations of private content and/or applications of other users). For example, as shown in
In some examples, the movement input directed to shared objects in the shared three-dimensional environment causes the electronic device to perform spatial refinement in the shared three-dimensional environment (including the shared objects) based on a type of the object and/or a direction of the movement input in the shared three-dimensional environment. In some examples, the electronic device performs spatial refinement in the shared three-dimensional environment in response to user input directed to the shared object in accordance with a determination that the shared object is an object of a first type. In some examples, the object type is determined based on an orientation of the shared object in the shared three-dimensional environment. For example, an object of the first type is a shared object that has a horizontal orientation in the shared three-dimensional environment relative to the viewpoint of the user of the electronic device. As shown in
As mentioned above, in some examples, movement input directed to a shared object that is an object of the first type causes the shared object and the avatar 315 corresponding to the user of the second electronic device 370 to move in the three-dimensional environment 350A in accordance with the movement input. For example, as shown in
In some examples, when the first electronic device 360 spatially refines the shared three-dimensional environment (including the shared virtual objects 314 and 352) in response to receiving input (e.g., selection input 372A in
Additionally or alternatively, in some examples, rather than moving the shared virtual object 314, which is an object of the first type as discussed above, to spatially refine the three-dimensional environment 350A (including the shared virtual objects 314 and 352 in the three-dimensional environment 350A), an avatar corresponding to the other user can be moved to produce a same or similar spatial refinement as discussed above. For example, rather than providing input directed to the grabber bar 335 and/or the shared virtual object 314, the user of the first electronic device 360 may provide a selection input (e.g., similar to that discussed above) directed to the avatar 315 corresponding to the user of the second electronic device 370. Subsequently, the user of the first electronic device 360 may provide a drag/movement input (e.g., similar to that discussed above) toward the viewpoint of the user of the first electronic device 360, as similarly shown in
As shown in
As outlined above and as shown in
In some examples, the shared virtual objects 314/352 may alternatively be translated laterally within three-dimensional environment 350A. Additionally, in some examples, the three-dimensional environment 350A may include one or more virtual objects that are not shared with the second electronic device 370 (e.g., private application windows) in the multi-user communication session. As shown in
As shown in
In some examples, in response to receiving the selection input 372B directed to the grabber bar 335, which is associated with the shared virtual object 314, the electronic device initiates spatial refinement in the three-dimensional environment 350A. For example, because the shared virtual object 314 is an object of the first type (e.g., a horizontally oriented object), as previously described above, movement input directed to the shared virtual object 314 causes the shared virtual objects and the avatar 315 corresponding to the user of the second electronic device 370 to move within the three-dimensional environment 350A. Additionally, as similarly described above, in some examples, in response to receiving the selection input 372B and/or the movement input 374B, the first electronic device 360 optionally displays the planar element (e.g., disc) 337 below the objects selected for spatial refinement in three-dimensional environment 350A. For example, as shown in
In some examples, as shown in
As described above with reference to
In some examples, in response to detecting an end of the movement input 374B and/or an end of the selection input 372B (e.g., a deselection input, such as release of the pinch gesture of the hand of the user), the first electronic device 360 optionally ceases moving the avatar 315 and the shared virtual objects 314 and 352 in three-dimensional environment 350A, as shown in
It should be understood that, while forward and lateral movement of the avatars 315/317 and the shared virtual objects 314 and 352 is illustrated and described herein, additional or alternative movements may be provided based on the movement of the hand of the user. For example, the electronic device may move an avatar and shared virtual objects forward and laterally in the three-dimensional environment in accordance with forward and lateral movement of the hand of the user. Additionally, it should be understood that, in some examples, additional or alternative options may be provided for initiating spatial refinement at an electronic device. For example, the user of the electronic device may select a spatial refinement affordance displayed in the three-dimensional environment that allows the user to individually select the objects and/or avatars the user desires to move in the three-dimensional environment. Additionally, in some examples, the electronic device may display a list of options, including an option to initiate spatial refinement, upon selection of an object (e.g., an avatar or a shared object).
Additionally, it should be understood that, while the spatial refinements illustrated in
As outlined above with reference to
As similarly discussed above, in some examples, the three-dimensional environments 450A/450B may include one or more virtual objects that are shared between the first electronic device 460 and the second electronic device 470 in the multi-user communication session. As shown in
As mentioned above, the three-dimensional environment 450A at the first electronic device 460 may include the avatar 415 corresponding to the user of the second electronic device 470, and the three-dimensional environment 450B at the second electronic device 470 may include the avatar 417 corresponding to the user of the first electronic device 460. As alluded to above, the user of the first electronic device 460 and the user of the second electronic device 470 are viewing the video content in the shared application window 432 in the three-dimensional environments 450A/450B in
In some examples, interaction input directed to the shared application window 432 causes the shared application window 432 to be moved within the shared three-dimensional environment. In some examples, the shared application window 432 may be an object of a second type, different from the first type described above with reference to
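To make the direction-dependent handling of a vertically oriented ("second type") object concrete, the non-limiting sketch below first classifies a drag relative to the viewpoint and then either moves the window alone, rescaling it in proportion to its new distance so that it continues to occupy a similar portion of the field of view, or rotates the window together with the avatar about the viewpoint for the radial case. The classification thresholds, helper names, and proportional-scaling rule are illustrative assumptions, and the non-radial lateral case described later in this section is not separately modeled here.

```swift
import Foundation   // for atan2, cos, sin

enum DragKind {
    case distanceChange   // toward or away from the viewpoint
    case vertical         // upward or downward relative to the viewpoint
    case radial           // around the viewpoint at a roughly constant horizontal distance
}

/// Classify a drag of a vertically oriented object by comparing its start and
/// end positions relative to the viewpoint. Thresholds are illustrative.
func classifyDrag(from start: SIMD3<Double>, to end: SIMD3<Double>,
                  viewpoint: SIMD3<Double>) -> DragKind {
    func horizontalDistance(_ point: SIMD3<Double>) -> Double {
        let d = SIMD2(point.x - viewpoint.x, point.z - viewpoint.z)
        return (d * d).sum().squareRoot()
    }
    let verticalChange = abs(end.y - start.y)
    let distanceChange = abs(horizontalDistance(end) - horizontalDistance(start))
    if verticalChange > max(distanceChange, 0.01) { return .vertical }
    if distanceChange > 0.01 { return .distanceChange }
    return .radial
}

struct WindowState {
    var position: SIMD3<Double>
    var width: Double
}

struct SceneState {
    var window: WindowState
    var avatarPosition: SIMD3<Double>
}

func distance(_ a: SIMD3<Double>, _ b: SIMD3<Double>) -> Double {
    ((a - b) * (a - b)).sum().squareRoot()
}

/// Rotate a point about a vertical axis through `center` by `angle` radians.
func rotated(_ point: SIMD3<Double>, about center: SIMD3<Double>, by angle: Double) -> SIMD3<Double> {
    let dx = point.x - center.x
    let dz = point.z - center.z
    return SIMD3(center.x + dx * cos(angle) + dz * sin(angle),
                 point.y,
                 center.z - dx * sin(angle) + dz * cos(angle))
}

/// Apply a drag of the shared window according to the behavior described above.
func applyDrag(_ scene: inout SceneState, from start: SIMD3<Double>, to end: SIMD3<Double>,
               viewpoint: SIMD3<Double>) {
    switch classifyDrag(from: start, to: end, viewpoint: viewpoint) {
    case .distanceChange, .vertical:
        // No spatial refinement: move only the window, and rescale it so that
        // it subtends a similar angle (smaller when closer, larger when farther).
        let oldDistance = distance(scene.window.position, viewpoint)
        scene.window.position += end - start
        let newDistance = distance(scene.window.position, viewpoint)
        scene.window.width *= newDistance / max(oldDistance, 1e-6)
    case .radial:
        // Spatial refinement: rotate the window and the avatar together about
        // the viewpoint by the angle swept out by the drag.
        let startAngle = atan2(start.x - viewpoint.x, start.z - viewpoint.z)
        let endAngle = atan2(end.x - viewpoint.x, end.z - viewpoint.z)
        let angle = endAngle - startAngle
        scene.window.position = rotated(scene.window.position, about: viewpoint, by: angle)
        scene.avatarPosition = rotated(scene.avatarPosition, about: viewpoint, by: angle)
    }
}
```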
As shown in
In some examples, in response to receiving the movement input 474A, the second electronic device 470 forgoes performing spatial refinement in the three-dimensional environment 450B (including the shared application window 432 and the avatar 417 corresponding to the user of the first electronic device 460). For example, as mentioned above, the second electronic device 470 performs spatial refinement in the three-dimensional environment 450B depending on a direction of the movement of the shared application window 432, which is an object of the second type (e.g., a vertically oriented object), in the three-dimensional environment 450B. In some examples, in accordance with a determination that the manner of the movement of the shared application window 432 is forward in the three-dimensional environment 450B (e.g., toward the viewpoint of the user of the second electronic device 470), the second electronic device 470 does not perform spatial refinement in the three-dimensional environment 450B. For example, as shown in
Rather, as shown in
In some examples, as similarly described above, when the second electronic device 470 receives the input for moving the shared application window 432 in the three-dimensional environment 450B while the first electronic device 460 and the second electronic device 470 are in the multi-user communication session, the second electronic device 470 transmits an indication of the movement of the shared application window 432. In some examples, when the first electronic device 460 receives the indication of the movement, the first electronic device 460 moves the shared application window 432 in the three-dimensional environment 450A based on the movement of the shared application window 432 in the three-dimensional environment 450B at the second electronic device 470. For example, because the second electronic device 470 did not spatially refine the three-dimensional environment 450B (including the shared application window 432) in response to the movement input 474A, the first electronic device 460 moves the shared application window 432 in the three-dimensional environment 450A instead of moving the avatar 415 corresponding to the user of the second electronic device 470 (e.g., which would have happened had the three-dimensional environment 450B been spatially refined at the second electronic device 470, as similarly described above). In some examples, as shown in
Additionally, as shown in
In some examples, the shared application window 432 may alternatively be moved backward in the three-dimensional environment 450B and farther from the viewpoint of the user of the second electronic device 470. For example, as shown in
As previously discussed above, the shared application window 432 is an object of the second type (e.g., a vertically oriented object). As described above with reference to
In some examples, as shown in
In some examples, as similarly described above, when the second electronic device 470 receives the input for moving the shared application window 432 in the three-dimensional environment 450B while the first electronic device 460 and the second electronic device 470 are in the multi-user communication session, the second electronic device 470 transmits an indication of the movement of the shared application window 432. In some examples, when the first electronic device 460 receives the indication of the movement, the first electronic device 460 moves the shared application window 432 in the three-dimensional environment 450A based on the movement of the shared application window 432 in the three-dimensional environment 450B at the second electronic device 470. For example, because the second electronic device 470 did not spatially refine the three-dimensional environment 450B (including the shared application window 432) in response to the movement input 474B, the first electronic device 460 moves the shared application window 432 in the three-dimensional environment 450A instead of moving the avatar 415 corresponding to the user of the second electronic device 470 (e.g., which would have happened had the three-dimensional environment 450B been spatially refined at the second electronic device 470, as similarly described above). In some examples, as shown in
Additionally, as shown in
Accordingly, as outlined above with reference to
In some examples, as shown in
In some examples, as shown in
As mentioned above, the movement input 474C is optionally rightward radially around the viewpoint of the user of the second electronic device 470 in three-dimensional environment 450B. In some examples, as shown in
In some examples, as shown in
Likewise, because the second electronic device 470 spatially refined the three-dimensional environment 450B (including the avatar 417 and the shared application window 432), the first electronic device 460 rotates the avatar 415 corresponding to the user of the second electronic device 470 within three-dimensional environment 450A based on the movement of the shared application window 432 at the second electronic device 470. For example, as shown in
Additionally, in some examples, when the first electronic device 460 rotates the avatar 415 in the three-dimensional environment 450A, the first electronic device 460 does not move (or rotate) the shared application window 432 in the three-dimensional environment 450A. For example, as shown in
It should be understood that, while the orientations of the faces of the avatars 415/417 are utilized in
Accordingly, as outlined above, in response to receiving an interaction input that includes movement of the shared application window radially in the shared three-dimensional environment about the viewpoint of the user, the electronic device performs spatial refinement in the shared three-dimensional environment. For example, spatially refining the shared application window 432 at the second electronic device 470 in response to the movement input 474C does not cause the avatar 415 to interfere with the user's view (e.g., at viewpoint 418A) of the video content at the first electronic device 460. As shown in
It should be understood that, in some examples, the above-described behavior applies for radial movement leftward (e.g., in the opposite direction than that shown in
In some examples, the shared application window 432 may alternatively be moved laterally within the shared three-dimensional environment while the first electronic device 460 and the second electronic device 470 are in the multi-user communication session. As shown in
In some examples, in response to receiving the movement input 474D, the second electronic device 470 determines a manner of the movement directed to the shared application window 432. As previously discussed above, because the shared application window 432 is optionally an object of the second type (e.g., a vertically oriented object relative to the viewpoint of the user), the second electronic device 470 evaluates a direction of the movement of the shared application window 432 in the three-dimensional environment 450B. As shown in
As mentioned above, the movement input 474D is optionally rightward relative to the viewpoint of the user of the second electronic device 470 in three-dimensional environment 450B. In some examples, as shown in
Accordingly, in some examples, as shown in
Likewise, because the second electronic device 470 spatially refined the three-dimensional environment 450B (including the avatar 417 and the shared application window 432), the first electronic device 460 rotates the avatar 415 corresponding to the user of the second electronic device 470 within three-dimensional environment 450A based on the movement of the shared application window 432 at the second electronic device 470. For example, as shown in
Additionally, in some examples, when the first electronic device 460 rotates the avatar 415 in the three-dimensional environment 450A, the first electronic device 460 moves the shared application window 432 in the three-dimensional environment 450A. For example, as shown in
Accordingly, as outlined above, in response to receiving an interaction input that includes movement of the shared application window laterally in the shared three-dimensional environment about the viewpoint of the user, the electronic device performs spatial refinement in the shared three-dimensional environment. For example, spatially refining the shared application window 432 at the second electronic device 470 in response to the movement input 474D does not cause the avatar 415 to interfere with the user's view (e.g., at viewpoint 418A) of the video content at the first electronic device 460. As shown in
In some examples, the shared application window 432 may alternatively be elevated in the three-dimensional environment 450B relative to the viewpoint of the user of the second electronic device 470. For example, as shown in
As previously discussed above, because the shared application window 432 is an object of the second type (e.g., a vertically oriented object), the second electronic device 470 spatially refines the three-dimensional environment 450B (including the shared application window 432) depending on the manner in which the shared application window 432 is moved in the three-dimensional environment 450B. As discussed above, the second electronic device 470 spatially refines the three-dimensional environment 450B (including the shared application window 432) in response to receiving movement input that corresponds to radial movement of the shared application window 432 (e.g., in
In some examples, as shown in
In some examples, as shown in
In some examples, in response to the inputs received at the second electronic device 470, the first electronic device 460 moves the shared application window 432 in the three-dimensional environment 450A based on the movement input 474E received at the second electronic device 470, without moving the avatar 415 corresponding to the user of the second electronic device 470. For example, as previously discussed above, the second electronic device 470 transmits an indication of the movement input 474E received at the second electronic device 470. As shown in
Additionally, in some examples, the first electronic device angles the shared application window 432 downward in the three-dimensional environment 450A to face toward the viewpoint 418A of the user of the first electronic device 460. For example, as shown in
Accordingly, as outlined above, in response to receiving an interaction input that includes movement of the shared application window upward or downward in the shared three-dimensional environment relative to the viewpoint of the user, the electronic device forgoes performing spatial refinement in the shared three-dimensional environment. For example, spatially refining the shared application window 432 at the second electronic device 470 in response to the movement input 474E may cause the avatar 415 to interfere with the user's view (e.g., at viewpoint 418A) of the video content at the first electronic device 460. As an example, if the avatar 415 were raised/elevated in
In some examples, the shared application window 432 may alternatively be moved vertically in a non-radial manner in the shared three-dimensional environment. For example, in
As described above with reference to
Thus, as described herein with reference to
It is understood that the examples shown and described herein are merely exemplary and that additional and/or alternative elements may be provided within the three-dimensional environment for interacting with the content and/or the avatars. It should be understood that the appearance, shape, form, and size of each of the various user interface elements and objects shown and described herein are exemplary and that alternative appearances, shapes, forms and/or sizes may be provided. For example, the virtual objects representative of application windows (e.g., 324 and 432) may be provided in an alternative shape than a rectangular shape, such as a circular shape, triangular shape, etc. In some examples, the various selectable affordances (e.g., grabber or handlebar affordances 335 and 435) described herein may be selected verbally via user verbal commands (e.g., “select grabber bar” or “select virtual object” verbal command). Additionally or alternatively, in some examples, the various options, user interface elements, control elements, etc. described herein may be selected and/or manipulated via user input received via one or more separate input devices in communication with the electronic device(s). For example, selection input may be received via physical input devices, such as a mouse, trackpad, keyboard, etc. in communication with the electronic device(s).
Additionally, it should be understood that, although the above methods are described with reference to two electronic devices, the above methods optionally apply for two or more electronic devices communicatively linked in a communication session. In some examples, while three, four, five or more electronic devices are communicatively linked in a multi-user communication session, when a user of one electronic device provides movement input at the electronic device, if the movement input is directed to a shared object of the first type (e.g., a horizontally oriented object, such as virtual object 314) in the multi-user communication session, the movement input moves the shared object and the other users' avatars at the electronic device, and if the movement input is directed to a shared object of the second type (e.g., a vertically oriented object, such as application window 432) in the multi-user communication session, the movement input moves the avatars corresponding to the users of the other electronic devices and the shared object at the electronic device depending on the manner (e.g., direction) of the movement input. For example, if the manner of the movement input directed to the shared object of the second type corresponds to forward or backward movement or upward or downward movement while the three, four, five or more electronic devices are communicatively linked in a multi-user communication session, the movement input moves the shared object at the electronic device without moving the avatars corresponding to the users of the other electronic devices.
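From the receiving device's side, a non-limiting sketch of the corresponding handling is shown below: when the sending device reports that it performed spatial refinement, the receiver leaves the shared content in place and repositions the sender's avatar instead, and otherwise it simply moves its own copy of the shared object. The indication format and the sign convention for the avatar update are assumptions made for illustration.

```swift
// Hypothetical indication transmitted by the device at which the drag occurred.
struct MovementIndication {
    var performedSpatialRefinement: Bool   // sender moved the object together with the avatars
    var delta: SIMD3<Double>               // translation applied at the sender, in shared coordinates
}

struct ReceiverScene {
    var sharedObjectPosition: SIMD3<Double>
    var senderAvatarPosition: SIMD3<Double>
}

/// Apply an indication received from another participant. If the sender
/// spatially refined (moved the shared content and avatars together), the
/// shared content stays put here and the sender's avatar moves by the opposite
/// amount, reflecting that the sender effectively repositioned themselves
/// relative to the shared content. Otherwise the shared object moves here too.
func apply(_ indication: MovementIndication, to scene: inout ReceiverScene) {
    if indication.performedSpatialRefinement {
        scene.senderAvatarPosition -= indication.delta
    } else {
        scene.sharedObjectPosition += indication.delta
    }
}
```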
In some examples, at 504, while displaying the computer-generated environment including the avatar corresponding to the user of the second electronic device and the first shared object, the first electronic device receives, via the one or more input devices, a first input corresponding to a request to move the first shared object in a first manner (e.g., forward movement) in the computer-generated environment. For example, the first electronic device receives a selection input, followed by a movement input directed to the first shared object, such as the movement input 374A directed to the shared virtual object 314 in the three-dimensional environment 350A as shown in
In some examples, at 510, in accordance with a determination that the first shared object is an object of a second type that is different from the first type and the first input is a first type of input, the first electronic device moves the first shared object in the computer-generated environment in the first manner in accordance with the first input without moving the avatar. For example, if the first electronic device determines that the first shared object is an object of the second type, such as a vertically oriented object (e.g., shared application window 432 in
It is understood that process 500 is an example and that more, fewer, or different operations can be performed in the same or in a different order. Additionally, the operations in process 500 described above are, optionally, implemented by running one or more functional modules in an information processing apparatus such as general-purpose processors (e.g., as described with respect to
In some examples, at 604, while displaying the computer-generated environment including the avatar corresponding to the user of the second electronic device and the first shared object, the first electronic device receives, from the second electronic device, a first indication corresponding to movement of the first shared object in accordance with first movement input received at the second electronic device. For example, the first electronic device receives an indication that the second electronic device has received movement input directed to the first shared object displayed at the second electronic device, such as movement input 374A directed to the shared virtual object 314 in the three-dimensional environment 350A as shown in
In some examples, at 610, in accordance with a determination that the first shared object is an object of a second type that is different from the first type and the first movement input is a first type of input, the first electronic device moves the first shared object in the computer-generated environment in accordance with the first movement input without moving the avatar. For example, if the first electronic device determines that the first shared object is an object of the second type, such as a vertically oriented object (e.g., shared application window 432 in
It is understood that process 600 is an example and that more, fewer, or different operations can be performed in the same or in a different order. Additionally, the operations in process 600 described above are, optionally, implemented by running one or more functional modules in an information processing apparatus such as general-purpose processors (e.g., as described with respect to
Therefore, according to the above, some examples of the disclosure are directed to a method comprising, at a first electronic device in communication with a display, one or more input devices, and a second electronic device: while in a communication session with the second electronic device, presenting, via the display, a computer-generated environment including an avatar corresponding to a user of the second electronic device and a first shared object; while displaying the computer-generated environment including the avatar corresponding to the user of the second electronic device and the first shared object, receiving, via the one or more input devices, a first input corresponding to a request to move the first shared object in a first manner in the computer-generated environment; and in response to receiving the first input, in accordance with a determination that the first shared object is an object of a first type, moving the avatar and the first shared object in the computer-generated environment in the first manner in accordance with the first input and, in accordance with a determination that the first shared object is an object of a second type that is different from the first type and the first input is a first type of input, moving the first shared object in the computer-generated environment in the first manner in accordance with the first input without moving the avatar.
Additionally or alternatively, in some examples, the first type of input corresponds to one or more of a change in distance between a viewpoint of a user of the first electronic device and the first shared object and vertical movement of the first shared object in the computer-generated environment relative to the viewpoint of the user. Additionally or alternatively, in some examples, the method further comprises, in response to receiving the first input, in accordance with a determination that the first shared object is an object of the second type and the first input is a second type of input, different from the first type of input, moving the avatar and the first shared object in the computer-generated environment in the first manner in accordance with the first input. Additionally or alternatively, in some examples, the second type of input corresponds to radial lateral movement relative to a viewpoint of a user of the first electronic device. Additionally or alternatively, in some examples, the first electronic device and the second electronic device each include a head-mounted display. Additionally or alternatively, in some examples, before receiving the first input, the computer-generated environment further includes a first unshared object. Additionally or alternatively, in some examples, the method further comprises, while displaying the computer-generated environment including the avatar corresponding to the user of the second electronic device and the first shared object, receiving, via the one or more input devices, a second input corresponding to a request to move the first unshared object in the computer-generated environment and, in response to receiving the second input, moving the first unshared object in the computer-generated environment in accordance with the second input without moving the avatar and the first shared object.
Additionally or alternatively, in some examples, the method further comprises, in response to receiving the first input, in accordance with a determination that the first shared object is an object of the first type, moving the avatar and the first shared object in the computer-generated environment in the first manner in accordance with the first input without moving the first unshared object and, in accordance with a determination that the first shared object is an object of the second type and the first input is the first type of input, moving the first shared object in the computer-generated environment in the first manner in accordance with the first input without moving the avatar and the first unshared object. Additionally or alternatively, in some examples, the object of the first type corresponds to an object that has a horizontal orientation relative to a viewpoint of a user of the first electronic device. Additionally or alternatively, in some examples, the object of the second type corresponds to an object that has a vertical orientation relative to a viewpoint of a user of the first electronic device. Additionally or alternatively, in some examples, the first input includes a pinch gesture provided by a hand of a user of the first electronic device and movement of the hand of the user while holding the pinch gesture with the hand.
Additionally or alternatively, in some examples, moving the first shared object in the first manner corresponds to moving the first shared object toward a viewpoint of the user of the first electronic device or moving the first shared object away from the viewpoint of the user of the first electronic device. Additionally or alternatively, in some examples, the method further comprises, in response to receiving the first input, in accordance with a determination that the first shared object is an object of the second type, scaling the first shared object in the computer-generated environment based on the movement of the first shared object in the first manner. Additionally or alternatively, in some examples, before receiving the first input, the first shared object has a first size in the computer-generated environment and scaling the first shared object in the computer-generated environment based on the movement of the first shared object in the first manner includes, in accordance with a determination that the first manner of movement corresponds to the movement of the first shared object toward the viewpoint of the user of the first electronic device, displaying, via the display, the first shared object with a second size, smaller than the first size, in the computer-generated environment.
Additionally or alternatively, in some examples, before receiving the first input, the first shared object has a first size in the computer-generated environment and scaling the first shared object in the computer-generated environment based on the movement of the first shared object in the first manner includes, in accordance with a determination that the first manner of movement corresponds to the movement of the first shared object away from the viewpoint of the user of the first electronic device, displaying, via the display, the first shared object with a second size, larger than the first size, in the computer-generated environment. Additionally or alternatively, in some examples, the method further comprises, in response to receiving the first input, in accordance with a determination that the first shared object is an object of the first type, forgoing scaling the first shared object in the computer-generated environment based on the movement of the first shared object in the first manner. Additionally or alternatively, in some examples, before receiving the first input, the first shared object has a first orientation in the computer-generated environment. Additionally or alternatively, in some examples, the method further comprises, while displaying the computer-generated environment including the avatar corresponding to the user of the second electronic device and the first shared object, receiving, via the one or more input devices, a second input corresponding to a request to move the first shared object laterally in the computer-generated environment relative to a viewpoint of a user of the first electronic device and, in response to receiving the second input, in accordance with the determination that the first shared object is an object of the first type or that the first shared object is an object of the second type, moving the avatar and the first shared object laterally in the computer-generated environment relative to the viewpoint of the user in accordance with the second input.
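The direction of the scaling in the two preceding paragraphs (smaller when moved closer, larger when moved farther) is consistent with distance-proportional scaling, which would keep the object's apparent angular size roughly constant. The helper below is a sketch of that assumption; the disclosure states only the direction of the size change, not an exact scale factor.

```swift
// Sketch: scaledSize = originalSize * (newDistance / originalDistance).
// The constant-angular-size rationale is an assumption; the disclosure only
// states that moving closer shrinks the object and moving farther enlarges it.
func scaledSize(originalSize: Float,
                originalDistance: Float,
                newDistance: Float) -> Float {
    guard originalDistance > 0 else { return originalSize }
    return originalSize * (newDistance / originalDistance)
}
```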
Additionally or alternatively, in some examples, the method further comprises, while displaying the computer-generated environment including the avatar corresponding to the user of the second electronic device and the first shared object, receiving, via the one or more input devices, a second input corresponding to a request to move the first shared object radially laterally in the computer-generated environment relative to a viewpoint of a user of the first electronic device and, in response to receiving the second input, in accordance with the determination that the first shared object is an object of the first type or that the first shared object is an object of the second type, moving the avatar and the first shared object radially laterally in the computer-generated environment relative to the viewpoint of the user in accordance with the second input. Additionally or alternatively, in some examples, the method further comprises, in response to receiving the second input, in accordance with the determination that the first shared object is an object of the first type, displaying, via the display, the first shared object with the first orientation and, in accordance with the determination that the first shared object is an object of the second type, displaying, via the display, the first shared object with a second orientation, different from the first orientation, that faces toward the viewpoint of the user.
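For the radial lateral case, the movement can be pictured as sweeping the object along an arc of constant radius around the viewpoint, with vertically oriented objects additionally re-aimed at the viewer. The function below is an illustrative sketch; its name, the yaw-only rotation axis, and the assumption that a vertical object's front is its local +Z axis are not from the disclosure.

```swift
import simd

// Illustrative only: assumes a yaw-only sweep around the viewpoint.
func radialLateralMove(position: SIMD3<Float>,
                       orientation: simd_quatf,
                       objectIsVertical: Bool,
                       viewpoint: SIMD3<Float>,
                       angleRadians: Float) -> (position: SIMD3<Float>, orientation: simd_quatf) {
    // Sweep the object along an arc of constant radius around the viewpoint.
    let sweep = simd_quatf(angle: angleRadians, axis: [0, 1, 0])
    let newPosition = viewpoint + sweep.act(position - viewpoint)

    if objectIsVertical {
        // Second-type objects are re-aimed toward the viewpoint after the sweep.
        let toViewer = simd_normalize(viewpoint - newPosition)
        return (newPosition, simd_quatf(from: [0, 0, 1], to: toViewer))
    }
    // First-type objects keep their original orientation.
    return (newPosition, orientation)
}
```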
Additionally or alternatively, in some examples, the method further comprises, while displaying the computer-generated environment including the avatar corresponding to the user of the second electronic device and the first shared object, receiving, via the one or more input devices, a second input corresponding to a request to move the first shared object vertically in the computer-generated environment relative to a viewpoint of a user of the first electronic device and, in response to receiving the second input, in accordance with a determination that the first shared object is an object of the first type or that the first shared object is an object of the second type, moving the first shared object vertically in the computer-generated environment relative to the viewpoint of the user in accordance with the second input without moving the avatar. Additionally or alternatively, in some examples, before receiving the first input, the computer-generated environment further includes a first unshared object. In some examples, the method further comprises, in response to receiving the first input, in accordance with a determination that the first shared object is an object of the second type and the first input is a second type of input, moving the avatar and the first shared object in the computer-generated environment in the first manner in accordance with the first input without moving the first unshared object.
Some examples of the disclosure are directed to a method comprising, at a first electronic device in communication with a display, one or more input devices, and a second electronic device: while in a communication session with the second electronic device, presenting, via the display, a computer-generated environment including an avatar corresponding to a user of the second electronic device and a first shared object; while displaying the computer-generated environment including the avatar corresponding to the user of the second electronic device and the first shared object, receiving, from the second electronic device, a first indication corresponding to movement of the first shared object in accordance with first movement input received at the second electronic device; and in response to receiving the first indication, in accordance with a determination that the first shared object is an object of a first type, moving the avatar in the computer-generated environment in accordance with the first movement input without moving the first shared object and, in accordance with a determination that the first shared object is an object of a second type that is different from the first type and the first movement input is a first type of input, moving the first shared object in the computer-generated environment in accordance with the first movement input without moving the avatar.
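On the receiving side, the same remote movement can be expressed either as avatar motion or as object motion, depending on the object type. The sketch below assumes a hypothetical MovementIndication payload and a simple sign convention for mirroring the remote drag in the local frame; both are illustrative, not part of the disclosure.

```swift
import simd

// Hypothetical payload and sign convention; both are illustrative.
struct MovementIndication {
    var delta: SIMD3<Float>                 // movement applied at the second device
    var inputIsDistanceOrVertical: Bool     // "first type of input"
}

func apply(_ indication: MovementIndication,
           objectIsHorizontal: Bool,
           avatarPosition: inout SIMD3<Float>,
           sharedObjectPosition: inout SIMD3<Float>) {
    if objectIsHorizontal {
        // A remote drag of a horizontally oriented object is shown locally as the
        // remote avatar moving while the shared object stays put. The negation is
        // an assumed way of expressing the remote motion in the local frame.
        avatarPosition -= indication.delta
    } else if indication.inputIsDistanceOrVertical {
        // A remote push/pull (or vertical move) of a vertically oriented object
        // moves the object itself; the avatar stays where it is.
        sharedObjectPosition += indication.delta
    }
}
```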
Additionally or alternatively, in some examples, the first type of input corresponds to one or more of a change in distance between a viewpoint of a user of the first electronic device and the first shared object and vertical movement of the first shared object in the computer-generated environment relative to the viewpoint of the user. Additionally or alternatively, in some examples, the method further comprises, in response to receiving the first indication, in accordance with a determination that the first shared object is an object of the second type and the first movement input is a second type of input, different from the first type of input, moving the avatar in the computer-generated environment in accordance with the first movement input without moving the first shared object. Additionally or alternatively, in some examples, the second type of input corresponds to radial lateral movement relative to a viewpoint of a user of the first electronic device. Additionally or alternatively, in some examples, the first electronic device and the second electronic device each include a head-mounted display. Additionally or alternatively, in some examples, before receiving the first indication, the computer-generated environment further includes a first unshared object of the first electronic device.
Additionally or alternatively, in some examples, the method further comprises, in response to receiving the first indication, in accordance with a determination that the first shared object is an object of the first type, moving the avatar in the computer-generated environment in accordance with the first movement input without moving the first shared object and the first unshared object of the first electronic device and, in accordance with a determination that the first shared object is an object of the second type and the first movement input is the first type of input, moving the first shared object in the computer-generated environment in accordance with the first movement input without moving the avatar and the first unshared object of the first electronic device. Additionally or alternatively, in some examples, the object of the first type corresponds to an object that has a horizontal orientation relative to a viewpoint of a user of the first electronic device. Additionally or alternatively, in some examples, the object of the second type corresponds to an object that has a vertical orientation relative to a viewpoint of a user of the first electronic device.
Additionally or alternatively, in some examples, the method further comprises, in response to receiving the first indication, in accordance with a determination that the first shared object is an object of the second type and that the first movement input corresponds to movement of the first shared object toward or away from a viewpoint of a user of the first electronic device, scaling the first shared object in the computer-generated environment based on the movement of the first shared object. Additionally or alternatively, in some examples, before receiving the first indication, the first shared object has a first size in the computer-generated environment and scaling the first shared object in the computer-generated environment based on the movement of the first shared object includes, in accordance with a determination that the first movement input corresponds to the movement of the first shared object toward the viewpoint of the user of the first electronic device, displaying, via the display, the first shared object with a second size, smaller than the first size, in the computer-generated environment. Additionally or alternatively, in some examples, before receiving the first indication, the first shared object has a first size in the computer-generated environment and scaling the first shared object in the computer-generated environment based on the movement of the first shared object includes, in accordance with a determination that the first movement input corresponds to the movement of the first shared object away from the viewpoint of the user of the first electronic device, displaying, via the display, the first shared object with a second size, larger than the first size, in the computer-generated environment.
Additionally or alternatively, in some examples, the method further comprises, in response to receiving the first indication, in accordance with a determination that the first shared object is an object of the first type, forgoing scaling the first shared object in the computer-generated environment based on the movement of the first shared object. Additionally or alternatively, in some examples, before receiving the first indication, the first shared object has a first orientation in the computer-generated environment. Additionally or alternatively, in some examples, the method further comprises, while displaying the computer-generated environment including the avatar corresponding to the user of the second electronic device and the first shared object, receiving, from the second electronic device, a second indication corresponding to lateral movement of the first shared object relative to a viewpoint of a user of the first electronic device in accordance with second movement input received at the second electronic device and, in response to receiving the second indication, in accordance with a determination that the first shared object is an object of the first type or that the first shared object is an object of the second type, moving the avatar laterally in the computer-generated environment relative to the viewpoint of the user in accordance with the second movement input without moving the first shared object.
Additionally or alternatively, in some examples, the method further comprises, while displaying the computer-generated environment including the avatar corresponding to the user of the second electronic device and the first shared object, receiving, from the second electronic device, a second indication corresponding to radial lateral movement of the first shared object relative to a viewpoint of a user of the first electronic device in accordance with second movement input received at the second electronic device and, in response to receiving the second indication, in accordance with a determination that the first shared object is an object of the first type, moving the avatar radially laterally in the computer-generated environment relative to the viewpoint of the user in accordance with the second movement input without moving the first shared object and, in accordance with a determination that the first shared object is an object of the second type, rotating the avatar in the computer-generated environment relative to the viewpoint of the user based on the second movement input without moving the first shared object. Additionally or alternatively, in some examples, the method further comprises, in response to receiving the second indication, in accordance with the determination that the first shared object is an object of the first type or that the first shared object is an object of the second type, displaying, via the display, the first shared object with the first orientation.
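The remote radial-lateral case above distinguishes sweeping the avatar around the viewpoint (first-type objects) from rotating the avatar relative to the viewpoint (second-type objects). The sketch below illustrates one possible reading with an assumed yaw-only rotation; the exact geometry is not specified by the disclosure.

```swift
import simd

// Illustrative only: assumes a yaw-only rotation; the disclosure does not specify
// the exact geometry of "rotating the avatar relative to the viewpoint".
func applyRemoteRadialLateral(avatarPosition: inout SIMD3<Float>,
                              avatarOrientation: inout simd_quatf,
                              objectIsHorizontal: Bool,
                              viewpoint: SIMD3<Float>,
                              angleRadians: Float) {
    let rotation = simd_quatf(angle: angleRadians, axis: [0, 1, 0])
    if objectIsHorizontal {
        // First-type object: sweep the avatar around the viewpoint; the shared
        // object does not move.
        avatarPosition = viewpoint + rotation.act(avatarPosition - viewpoint)
    } else {
        // Second-type object: the avatar is rotated relative to the viewpoint
        // (modeled here as an orientation change); the shared object does not move.
        avatarOrientation = rotation * avatarOrientation
    }
}
```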
Additionally or alternatively, in some examples, the method further comprises, while displaying the computer-generated environment including the avatar corresponding to the user of the second electronic device and the first shared object, receiving, from the second electronic device, a second indication corresponding to vertical movement of the first shared object relative to a viewpoint of a user of the first electronic device in accordance with second movement input received at the second electronic device and, in response to receiving the second indication, in accordance with a determination that the first shared object is an object of the first type or that the first shared object is an object of the second type, moving the first shared object vertically in the computer-generated environment relative to the viewpoint of the user in accordance with the second movement input without moving the avatar. Additionally or alternatively, in some examples, before receiving the first indication, the computer-generated environment further includes a first unshared object of the second electronic device. In some examples, the method further comprises, in response to receiving the first indication, in accordance with the determination that the first shared object is an object of the first type, moving the avatar and the first unshared object of the second electronic device in the computer-generated environment in accordance with the first movement input without moving the first shared object and, in accordance with the determination that the first shared object is an object of the second type and the first movement input is the first type of input, moving the first shared object in the computer-generated environment in accordance with the first movement input without moving the avatar and the first unshared object of the second electronic device.
Additionally or alternatively, in some examples, before receiving the first indication, the computer-generated environment further includes a first unshared object of the first electronic device. In some examples, the method further comprises, in response to receiving the first indication, in accordance with a determination that the first shared object is an object of the second type and the first movement input is a second type of input, different from the first type of input, moving the avatar in the computer-generated environment in accordance with the first movement input without moving the first shared object and the first unshared object of the first electronic device.
Some examples of the disclosure are directed to an electronic device comprising: one or more processors; memory; and one or more programs stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for performing any of the above methods.
Some examples of the disclosure are directed to a non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by one or more processors of an electronic device, cause the electronic device to perform any of the above methods.
Some examples of the disclosure are directed to an electronic device, comprising: one or more processors; memory; and means for performing any of the above methods.
Some examples of the disclosure are directed to an electronic device, comprising: an information processing apparatus for use in an electronic device, the information processing apparatus comprising means for performing any of the above methods.
The foregoing description, for purposes of explanation, has been provided with reference to specific examples. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The examples were chosen and described in order to best explain the principles of the invention and its practical applications, and to thereby enable others skilled in the art to best use the invention and the various described examples, with various modifications as are suited to the particular use contemplated.
This application claims the benefit of U.S. Provisional Application No. 63/375,991, filed Sep. 16, 2022, the content of which is herein incorporated by reference in its entirety for all purposes.