This relates generally to systems and methods for presenting three-dimensional immersive applications in multi-user communication sessions.
Some computer graphical environments provide two-dimensional and/or three-dimensional environments where at least some objects displayed for a user's viewing are virtual and generated by a computer. In some examples, the three-dimensional environments are presented by multiple devices communicating in a multi-user communication session. In some examples, an avatar (e.g., a representation) of each user participating in the multi-user communication session (e.g., via the computing devices) is displayed in the three-dimensional environment of the multi-user communication session. In some examples, content can be shared in the three-dimensional environment for viewing and interaction by multiple users participating in the multi-user communication session.
Some examples of the disclosure are directed to systems and methods for sharing and presenting content in a three-dimensional environment that includes one or more avatars corresponding to one or more users of one or more electronic devices in a multi-user communication session. In some examples, a first electronic device and a second electronic device are communicatively linked in a multi-user communication session, wherein the first electronic device and the second electronic device are each configured to display a three-dimensional environment. In some examples, the first electronic device displays an avatar corresponding to a user of the second electronic device in the three-dimensional environment, and the second electronic device displays an avatar corresponding to a user of the first electronic device in the three-dimensional environment. In some examples, audio corresponding to the voice of the user of the first electronic device and of the user of the second electronic device, respectively, is presented with the corresponding avatar in the multi-user communication session. In some examples, the first electronic device and the second electronic device may share and present content in the three-dimensional environment. In some examples, depending on a type of content shared in the three-dimensional environment, the first electronic device and the second electronic device selectively maintain display of the avatars when presenting the content in the three-dimensional environment.
In some examples, when content of a first type (e.g., immersive content corresponding to a three-dimensional scene/environment) is shared between the first electronic device and the second electronic device, the avatars corresponding to the users of the first electronic device and the second electronic device remain displayed when presenting the content in the three-dimensional environment. In some examples, when content of a second type (e.g., immersive content corresponding to a three-dimensional representation of video) is shared between the first electronic device and the second electronic device, the avatars corresponding to the users of the first electronic device and the second electronic device cease being displayed when presenting the content in the three-dimensional environment. In some such examples, audio corresponding to the voices of the users of the first electronic device and the second electronic device remain presented when the avatars are no longer displayed. In some examples, when content of a third type (e.g., two-dimensional content corresponding to a two-dimensional representation of a video, image, or other content) is shared between the first electronic device and the second electronic device, the avatars corresponding to the users of the first electronic device and the second electronic device remain displayed when presenting the content in a full-screen mode in the three-dimensional environment. In some examples, when content of a fourth type (e.g., two-dimensional content displayed in a virtual object corresponding to an application window) is shared between the first electronic device and the second electronic device, avatars corresponding to the users of the first electronic device and the second electronic device remain displayed when presenting the content in the virtual object in the three-dimensional environment.
In some examples, while the first electronic device and the second electronic device are in the multi-user communication session, when content is presented in the three-dimensional environment at one electronic device but not the other electronic device, the avatars corresponding to the users of the first electronic device and the second electronic device selectively remain displayed in the three-dimensional environment depending on the type of content being presented. In some examples, when content of the first type is presented at one electronic device but not the other electronic device, the avatars corresponding to the users of the first electronic device and the second electronic device cease being displayed in the three-dimensional environment while in the multi-user communication session. In some examples, when content of the second type is presented at one electronic device but not the other electronic device, the avatars corresponding to the users of the first electronic device and the second electronic device cease being displayed in the three-dimensional environment while in the multi-user communication session. In some examples, when content of the third type is presented at one electronic device but not the other electronic device, the avatars corresponding to the users of the first electronic device and the second electronic device cease being displayed in the three-dimensional environment while in the multi-user communication session. In some examples, when content of the fourth type is presented at one electronic device but not the other electronic device, avatars corresponding to the users of the first electronic device and the second electronic device remain displayed in the three-dimensional environment while in the multi-user communication session.
The full descriptions of these examples are provided in the Drawings and the Detailed Description, and it is understood that this Summary does not limit the scope of the disclosure in any way.
For improved understanding of the various examples described herein, reference should be made to the Detailed Description below along with the following drawings. Like reference numerals often refer to corresponding parts throughout the drawings.
Some examples of the disclosure are directed to systems and methods for sharing and presenting content in a three-dimensional environment that includes one or more avatars corresponding to one or more users of one or more electronic devices in a multi-user communication session. In some examples, a first electronic device and a second electronic device are communicatively linked in a multi-user communication session, wherein the first electronic device and the second electronic device are each configured to display a three-dimensional environment. In some examples, the first electronic device displays an avatar corresponding to a user of the second electronic device in the three-dimensional environment, and the second electronic device displays an avatar corresponding to a user of the first electronic device in the three-dimensional environment. In some examples, audio corresponding to the voice of the user of the first electronic device and of the user of the second electronic device, respectively, is presented with the corresponding avatar in the multi-user communication session. In some examples, the first electronic device and the second electronic device may share and present content in the three-dimensional environment. In some examples, depending on a type of content shared in the three-dimensional environment, the first electronic device and the second electronic device selectively maintain display of the avatars when presenting the content in the three-dimensional environment.
In some examples, when content of a first type (e.g., immersive content corresponding to a three-dimensional scene/environment) is shared between the first electronic device and the second electronic device, the avatars corresponding to the users of the first electronic device and the second electronic device remain displayed when presenting the content in the three-dimensional environment. In some examples, when content of a second type (e.g., immersive content corresponding to a three-dimensional representation of video) is shared between the first electronic device and the second electronic device, the avatars corresponding to the users of the first electronic device and the second electronic device cease being displayed when presenting the content in the three-dimensional environment. In some such examples, audio corresponding to the voices of the users of the first electronic device and the second electronic device remain presented when the avatars are no longer displayed. In some examples, when content of a third type (e.g., two-dimensional content corresponding to a two-dimensional representation of a video, image, or other content) is shared between the first electronic device and the second electronic device, the avatars corresponding to the users of the first electronic device and the second electronic device remain displayed when presenting the content in a full-screen mode in the three-dimensional environment. In some examples, when content of a fourth type (e.g., two-dimensional content displayed in a virtual object corresponding to an application window) is shared between the first electronic device and the second electronic device, avatars corresponding to the users of the first electronic device and the second electronic device remain displayed when presenting the content in the virtual object in the three-dimensional environment.
In some examples, while the first electronic device and the second electronic device are in the multi-user communication session, when content is presented in the three-dimensional environment at one electronic device but not the other electronic device, the avatars corresponding to the users of the first electronic device and the second electronic device selectively remain displayed in the three-dimensional environment depending on the type of content being presented. In some examples, when content of the first type is presented at one electronic device but not the other electronic device, the avatars corresponding to the users of the first electronic device and the second electronic device cease being displayed in the three-dimensional environment while in the multi-user communication session. In some examples, when content of the second type is presented at one electronic device but not the other electronic device, the avatars corresponding to the users of the first electronic device and the second electronic device cease being displayed in the three-dimensional environment while in the multi-user communication session. In some examples, when content of the third type is presented at one electronic device but not the other electronic device, the avatars corresponding to the users of the first electronic device and the second electronic device cease being displayed in the three-dimensional environment while in the multi-user communication session. In some examples, when content of the fourth type is presented at one electronic device but not the other electronic device, avatars corresponding to the users of the first electronic device and the second electronic device remain displayed in the three-dimensional environment while in the multi-user communication session.
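Purely as an illustrative sketch (the enum cases, names, and function below are assumptions of this sketch and not part of the examples described above), the avatar-visibility behavior for the four content types can be expressed as a simple policy:

```swift
// Minimal sketch of the avatar-visibility rules described above. Type and
// function names are illustrative assumptions.
enum SharedContentType {
    case immersiveScene    // first type: immersive three-dimensional scene/environment
    case immersiveVideo    // second type: three-dimensional representation of video
    case fullScreenMedia   // third type: two-dimensional content presented in a full-screen mode
    case windowedContent   // fourth type: two-dimensional content in an application window
}

/// Returns whether avatars of the session participants remain displayed.
/// `sharedWithAllDevices` is true when the content is shared in the session,
/// and false when it is presented at one electronic device but not the other.
func avatarsRemainDisplayed(for content: SharedContentType,
                            sharedWithAllDevices: Bool) -> Bool {
    if sharedWithAllDevices {
        switch content {
        case .immersiveScene, .fullScreenMedia, .windowedContent:
            return true          // avatars remain displayed with the content
        case .immersiveVideo:
            return false         // avatars cease to be displayed; voice audio continues
        }
    } else {
        // Content presented at only one device: only windowed (fourth-type) content
        // keeps the avatars displayed.
        return content == .windowedContent
    }
}
```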
In some examples, sharing the content in the three-dimensional environment while in the multi-user communication session may include interaction with one or more user interface elements. In some examples, a user's gaze may be tracked by the electronic device as an input for targeting a selectable option/affordance within a respective user interface element when sharing the content in the three-dimensional environment. For example, gaze can be used to identify one or more options/affordances targeted for selection using another selection input. In some examples, a respective option/affordance may be selected using hand-tracking input detected via an input device in communication with the electronic device. In some examples, objects displayed in the three-dimensional environment may be moved and/or reoriented in the three-dimensional environment in accordance with movement input detected via the input device.
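For illustration, a minimal sketch of gaze-targeted selection confirmed by a separate selection input (e.g., a hand gesture) follows; the types, names, and two-dimensional hit test are assumptions of this sketch rather than a description of any particular implementation:

```swift
// Gaze identifies the targeted option/affordance; a detected pinch (or other
// selection input) confirms the selection. All names here are illustrative.
struct Point { var x: Double; var y: Double }

struct Rect {
    var minX: Double, minY: Double, maxX: Double, maxY: Double
    func contains(_ p: Point) -> Bool {
        p.x >= minX && p.x <= maxX && p.y >= minY && p.y <= maxY
    }
}

struct Affordance {
    let identifier: String
    let frame: Rect   // region of the user interface element, in a shared coordinate space
}

func selectedAffordance(gazePoint: Point,
                        selectionInputDetected: Bool,
                        affordances: [Affordance]) -> Affordance? {
    guard selectionInputDetected else { return nil }
    return affordances.first { $0.frame.contains(gazePoint) }
}
```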
It should be understood that virtual object 110 is a representative virtual object and one or more different virtual objects (e.g., of various dimensionality such as two-dimensional or three-dimensional virtual objects) can be included and rendered in a three-dimensional computer-generated environment. For example, the virtual object can represent an application or a user interface displayed in the computer-generated environment. In some examples, the virtual object can represent content corresponding to the application and/or displayed via the user interface in the computer-generated environment. In some examples, the virtual object 110 is optionally configured to be interactive and responsive to user input, such that a user may virtually touch, tap, move, rotate, or otherwise interact with the virtual object. In some examples, the virtual object 110 may be displayed in a three-dimensional computer-generated environment within a multi-user communication session (“multi-user communication session,” “communication session”). In some such examples, as described in more detail below, the virtual object 110 may be viewable and/or configured to be interactive and responsive to multiple users and/or user input provided by multiple users, respectively. Additionally, it should be understood that the three-dimensional environment (or three-dimensional virtual object) described herein may be a representation of a three-dimensional environment (or three-dimensional virtual object) projected or presented at an electronic device.
In the discussion that follows, an electronic device that is in communication with a display generation component and one or more input devices is described. It should be understood that the electronic device optionally is in communication with one or more other physical user-interface devices, such as a touch-sensitive surface, a physical keyboard, a mouse, a joystick, a hand tracking device, an eye tracking device, a stylus, etc. Further, as described above, it should be understood that the described electronic device, display, and touch-sensitive surface are optionally distributed amongst two or more devices. Therefore, as used in this disclosure, information displayed on the electronic device or by the electronic device is optionally used to describe information outputted by the electronic device for display on a separate display device (touch-sensitive or not). Similarly, as used in this disclosure, input received on the electronic device (e.g., touch input received on a touch-sensitive surface of the electronic device, or touch input received on the surface of a stylus) is optionally used to describe input received on a separate input device, from which the electronic device receives input information.
The device typically supports a variety of applications, such as one or more of the following: a drawing application, a presentation application, a word processing application, a website creation application, a disk authoring application, a spreadsheet application, a gaming application, a telephone application, a video conferencing application, an e-mail application, an instant messaging application, a workout support application, a photo management application, a digital camera application, a digital video camera application, a web browsing application, a digital music player application, a television channel browsing application, and/or a digital video player application.
As illustrated in
Communication circuitry 222A, 222B optionally includes circuitry for communicating with electronic devices, networks, such as the Internet, intranets, a wired network and/or a wireless network, cellular networks, and wireless local area networks (LANs). Communication circuitry 222A, 222B optionally includes circuitry for communicating using near-field communication (NFC) and/or short-range communication, such as Bluetooth®.
Processor(s) 218A, 218B include one or more general processors, one or more graphics processors, and/or one or more digital signal processors. In some examples, memory 220A, 220B is a non-transitory computer-readable storage medium (e.g., flash memory, random access memory, or other volatile or non-volatile memory or storage) that stores computer-readable instructions configured to be executed by processor(s) 218A, 218B to perform the techniques, processes, and/or methods described below. In some examples, memory 220A, 220B can include more than one non-transitory computer-readable storage medium. A non-transitory computer-readable storage medium can be any medium (e.g., excluding a signal) that can tangibly contain or store computer-executable instructions for use by or in connection with the instruction execution system, apparatus, or device. In some examples, the storage medium is a transitory computer-readable storage medium. In some examples, the storage medium is a non-transitory computer-readable storage medium. The non-transitory computer-readable storage medium can include, but is not limited to, magnetic, optical, and/or semiconductor storages. Examples of such storage include magnetic disks, optical discs based on CD, DVD, or Blu-ray technologies, as well as persistent solid-state memory such as flash, solid-state drives, and the like.
In some examples, display generation component(s) 214A, 214B include a single display (e.g., a liquid-crystal display (LCD), an organic light-emitting diode (OLED) display, or other types of display). In some examples, display generation component(s) 214A, 214B include multiple displays. In some examples, display generation component(s) 214A, 214B can include a display with touch capability (e.g., a touch screen), a projector, a holographic projector, a retinal projector, etc. In some examples, devices 260 and 270 include touch-sensitive surface(s) 209A and 209B, respectively, for receiving user inputs, such as tap inputs and swipe inputs or other gestures. In some examples, display generation component(s) 214A, 214B and touch-sensitive surface(s) 209A, 209B form touch-sensitive display(s) (e.g., a touch screen integrated with devices 260 and 270, respectively, or external to devices 260 and 270, respectively, that is in communication with devices 260 and 270).
Devices 260 and 270 optionally include image sensor(s) 206A and 206B, respectively. Image sensor(s) 206A/206B optionally include one or more visible light image sensors, such as charge-coupled device (CCD) sensors, and/or complementary metal-oxide-semiconductor (CMOS) sensors operable to obtain images of physical objects from the real-world environment. Image sensor(s) 206A/206B also optionally include one or more infrared (IR) sensors, such as a passive or an active IR sensor, for detecting infrared light from the real-world environment. For example, an active IR sensor includes an IR emitter for emitting infrared light into the real-world environment. Image sensor(s) 206A/206B also optionally include one or more cameras configured to capture movement of physical objects in the real-world environment. Image sensor(s) 206A/206B also optionally include one or more depth sensors configured to detect the distance of physical objects from device 260/270. In some examples, information from one or more depth sensors can allow the device to identify and differentiate objects in the real-world environment from other objects in the real-world environment. In some examples, one or more depth sensors can allow the device to determine the texture and/or topography of objects in the real-world environment.
In some examples, devices 260 and 270 use CCD sensors, event cameras, and depth sensors in combination to detect the physical environment around devices 260 and 270. In some examples, image sensor(s) 206A/206B include a first image sensor and a second image sensor. The first image sensor and the second image sensor work in tandem and are optionally configured to capture different information of physical objects in the real-world environment. In some examples, the first image sensor is a visible light image sensor and the second image sensor is a depth sensor. In some examples, device 260/270 uses image sensor(s) 206A/206B to detect the position and orientation of device 260/270 and/or display generation component(s) 214A/214B in the real-world environment. For example, device 260/270 uses image sensor(s) 206A/206B to track the position and orientation of display generation component(s) 214A/214B relative to one or more fixed objects in the real-world environment.
In some examples, device 260/270 includes microphone(s) 213A/213B or other audio sensors. Device 260/270 uses microphone(s) 213A/213B to detect sound from the user and/or the real-world environment of the user. In some examples, microphone(s) 213A/213B include an array of microphones (a plurality of microphones) that optionally operate in tandem, such as to identify ambient noise or to locate the source of a sound in the space of the real-world environment.
Device 260/270 includes location sensor(s) 204A/204B for detecting a location of device 260/270 and/or display generation component(s) 214A/214B. For example, location sensor(s) 204A/204B can include a GPS receiver that receives data from one or more satellites and allows device 260/270 to determine the device's absolute position in the physical world.
Device 260/270 includes orientation sensor(s) 210A/210B for detecting orientation and/or movement of device 260/270 and/or display generation component(s) 214A/214B. For example, device 260/270 uses orientation sensor(s) 210A/210B to track changes in the position and/or orientation of device 260/270 and/or display generation component(s) 214A/214B, such as with respect to physical objects in the real-world environment. Orientation sensor(s) 210A/210B optionally include one or more gyroscopes and/or one or more accelerometers.
Device 260/270 includes hand tracking sensor(s) 202A/202B and/or eye tracking sensor(s) 212A/212B, in some examples. Hand tracking sensor(s) 202A/202B are configured to track the position/location of one or more portions of the user's hands, and/or motions of one or more portions of the user's hands with respect to the extended reality environment, relative to the display generation component(s) 214A/214B, and/or relative to another defined coordinate system. Eye tracking sensor(s) 212A/212B are configured to track the position and movement of a user's gaze (eyes, face, or head, more generally) with respect to the real-world or extended reality environment and/or relative to the display generation component(s) 214A/214B. In some examples, hand tracking sensor(s) 202A/202B and/or eye tracking sensor(s) 212A/212B are implemented together with the display generation component(s) 214A/214B. In some examples, the hand tracking sensor(s) 202A/202B and/or eye tracking sensor(s) 212A/212B are implemented separately from the display generation component(s) 214A/214B.
In some examples, the hand tracking sensor(s) 202A/202B can use image sensor(s) 206A/206B (e.g., one or more IR cameras, 3D cameras, depth cameras, etc.) that capture three-dimensional information from the real-world including one or more hands (e.g., of a human user). In some examples, the hands can be resolved with sufficient resolution to distinguish fingers and their respective positions. In some examples, one or more image sensor(s) 206A/206B are positioned relative to the user to define a field of view of the image sensor(s) 206A/206B and an interaction space in which finger/hand position, orientation and/or movement captured by the image sensors are used as inputs (e.g., to distinguish from a user's resting hand or other hands of other persons in the real-world environment). Tracking the fingers/hands for input (e.g., gestures, touch, tap, etc.) can be advantageous in that it does not require the user to touch, hold or wear any sort of beacon, sensor, or other marker.
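As one illustrative example of hand-tracking input used for selection (a hedged sketch; the joint names and threshold are assumptions, not a description of any particular hand-tracking pipeline), a pinch can be recognized when the tracked thumb tip and index fingertip come within a small distance of each other:

```swift
// Illustrative pinch recognition from tracked fingertip positions.
struct Vector3 { var x, y, z: Double }

func distance(_ a: Vector3, _ b: Vector3) -> Double {
    let dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z
    return (dx * dx + dy * dy + dz * dz).squareRoot()
}

/// A pinch is detected when the thumb tip and index fingertip are closer than `threshold` meters.
func isPinch(thumbTip: Vector3, indexTip: Vector3, threshold: Double = 0.015) -> Bool {
    distance(thumbTip, indexTip) < threshold
}
```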
In some examples, eye tracking sensor(s) 212A/212B include at least one eye tracking camera (e.g., an infrared (IR) camera) and/or illumination sources (e.g., IR light sources, such as LEDs) that emit light towards a user's eyes. The eye tracking cameras may be pointed towards a user's eyes to receive reflected IR light from the light sources directly or indirectly from the eyes. In some examples, both eyes are tracked separately by respective eye tracking cameras and illumination sources, and a focus/gaze can be determined from tracking both eyes. In some examples, one eye (e.g., a dominant eye) is tracked by a respective eye tracking camera/illumination source(s).
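As a simplified sketch of determining a single focus/gaze from two separately tracked eyes (the averaging shown here is an assumption for illustration; a real gaze pipeline would include calibration and filtering), the per-eye origins and directions can be combined into one gaze ray:

```swift
// Illustrative combination of two per-eye gaze estimates into a single ray.
struct Vector3 { var x, y, z: Double }
struct GazeRay { var origin: Vector3; var direction: Vector3 }

func midpoint(_ a: Vector3, _ b: Vector3) -> Vector3 {
    Vector3(x: (a.x + b.x) / 2, y: (a.y + b.y) / 2, z: (a.z + b.z) / 2)
}

func normalized(_ v: Vector3) -> Vector3 {
    let length = (v.x * v.x + v.y * v.y + v.z * v.z).squareRoot()
    return length > 0 ? Vector3(x: v.x / length, y: v.y / length, z: v.z / length) : v
}

func combinedGaze(leftOrigin: Vector3, leftDirection: Vector3,
                  rightOrigin: Vector3, rightDirection: Vector3) -> GazeRay {
    GazeRay(origin: midpoint(leftOrigin, rightOrigin),
            direction: normalized(midpoint(leftDirection, rightDirection)))
}
```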
Device 260/270 and system 201 are not limited to the components and configuration of
As shown in
As mentioned above, in some examples, the first electronic device 360 is optionally in a multi-user communication session with the second electronic device 370. For example, the first electronic device 360 and the second electronic device 370 (e.g., via communication circuitry 222A/222B) are configured to present a shared three-dimensional environment 350A/350B that includes one or more shared virtual objects (e.g., content such as images, video, audio and the like, representations of user interfaces of applications, etc.). As used herein, the term “shared three-dimensional environment” refers to a three-dimensional environment that is independently presented, displayed, and/or visible at two or more electronic devices via which content, applications, data, and the like may be shared and/or presented to users of the two or more electronic devices. In some examples, while the first electronic device 360 is in the multi-user communication session with the second electronic device 370, an avatar corresponding to the user of one electronic device is optionally displayed in the three-dimensional environment that is displayed via the other electronic device. For example, as shown in
In some examples, the presentation of avatars 315/317 as part of a shared three-dimensional environment is optionally accompanied by an audio effect corresponding to a voice of the users of the electronic devices 370/360. For example, the avatar 315 displayed in the three-dimensional environment 350A using the first electronic device 360 is optionally accompanied by an audio effect corresponding to the voice of the user of the second electronic device 370. In some such examples, when the user of the second electronic device 370 speaks, the voice of the user may be detected by the second electronic device 370 (e.g., via the microphone(s) 213B) and transmitted to the first electronic device 360 (e.g., via the communication circuitry 222B/222A), such that the detected voice of the user of the second electronic device 370 may be presented as audio (e.g., using speaker(s) 216A) to the user of the first electronic device 360 in three-dimensional environment 350A. In some examples, the audio effect corresponding to the voice of the user of the second electronic device 370 may be spatialized such that it appears to the user of the first electronic device 360 to emanate from the location of avatar 315 in the shared three-dimensional environment 350A. Similarly, the avatar 317 displayed in the three-dimensional environment 350B using the second electronic device 370 is optionally accompanied by an audio effect corresponding to the voice of the user of the first electronic device 360. In some such examples, when the user of the first electronic device 360 speaks, the voice of the user may be detected by the first electronic device 360 (e.g., via the microphone(s) 213A) and transmitted to the second electronic device 370 (e.g., via the communication circuitry 222A/222B), such that the detected voice of the user of the first electronic device 360 may be presented as audio (e.g., using speaker(s) 216B) to the user of the second electronic device 370 in three-dimensional environment 350B. In some examples, the audio effect corresponding to the voice of the user of the first electronic device 360 may be spatialized such that it appears to the user of the second electronic device 370 to emanate from the location of avatar 317 in the shared three-dimensional environment 350B.
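For illustration, a minimal sketch of spatializing a remote participant's voice at the corresponding avatar's location follows; the distance-attenuation and panning model, and all names, are assumptions of this sketch rather than the audio pipeline of any particular device:

```swift
// Illustrative spatialization: the remote voice is attenuated with distance and
// panned left/right based on the avatar's position relative to the listener,
// who is assumed to face the -z axis.
struct Vector3 { var x, y, z: Double }

struct SpatialAudioParameters {
    var gain: Double   // 0...1, decreases with distance from the listener
    var pan: Double    // -1 (full left) ... +1 (full right)
}

func spatialize(voiceAt avatarPosition: Vector3,
                listenerPosition: Vector3,
                referenceDistance: Double = 1.0) -> SpatialAudioParameters {
    let dx = avatarPosition.x - listenerPosition.x
    let dz = avatarPosition.z - listenerPosition.z
    let horizontalDistance = (dx * dx + dz * dz).squareRoot()
    let gain = min(1.0, referenceDistance / max(horizontalDistance, referenceDistance))
    let pan = horizontalDistance > 0 ? max(-1.0, min(1.0, dx / horizontalDistance)) : 0.0
    return SpatialAudioParameters(gain: gain, pan: pan)
}
```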
In some examples, while in the multi-user communication session, the avatars 315/317 are displayed in the three-dimensional environments 350A/350B with respective orientations that correspond to and/or are based on orientations of the electronic devices 360/370 (and/or the users of electronic devices 360/370) in the physical environments surrounding the electronic devices 360/370. For example, as shown in
In some examples, while in the multi-user communication session, the avatars 315/317 are displayed in the three-dimensional environments 350A/350B with regard to physical objects in the physical environments surrounding the electronic devices 360/370. For example, at the first electronic device 360, the avatar 315 corresponding to the user of the second electronic device 370 is optionally displayed at a predetermined location in the three-dimensional environment 350A (e.g., beside the representation of the table 306′). Similarly, at the second electronic device 370, the avatar 317 corresponding to the user of the first electronic device 360 is optionally displayed at a predetermined location in the three-dimensional environment 350B (e.g., to the right of the representation of the coffee table 308′). In some examples, the predetermined locations at which the avatars 315/317 are displayed in the three-dimensional environments 350A/350B are selected with respect to physical objects in the physical environments surrounding the electronic devices 360/370. For example, the avatar 315 is displayed in the three-dimensional environment 350A at a respective location that is not obscured by a (e.g., representation of) a physical object (e.g., the table 306′), and the avatar 317 is displayed in the three-dimensional environment 350B at a respective location that is not obscured by a (e.g., representation of) a physical object (e.g., the coffee table 308′ or floor lamp 307′).
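As a hedged sketch of the placement behavior just described (the candidate list and the two-dimensional footprint test are illustrative assumptions), an avatar's predetermined location can be chosen so that it is not obscured by a representation of a physical object:

```swift
// Illustrative avatar placement: pick the first candidate floor-plane location
// that does not fall within any physical object's footprint.
struct FloorPoint { var x: Double; var z: Double }

struct Footprint {
    var minX: Double, minZ: Double, maxX: Double, maxZ: Double
    func contains(_ p: FloorPoint) -> Bool {
        p.x >= minX && p.x <= maxX && p.z >= minZ && p.z <= maxZ
    }
}

func avatarPlacement(candidates: [FloorPoint],
                     physicalObjectFootprints: [Footprint]) -> FloorPoint? {
    candidates.first { candidate in
        !physicalObjectFootprints.contains { $0.contains(candidate) }
    }
}
```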
Additionally, in some examples, while in the multi-user communication session, a viewpoint of the three-dimensional environments 350A/350B and/or a location of the viewpoint of the three-dimensional environments 350A/350B optionally changes in accordance with movement of the electronic devices 360/370 (e.g., by the users of the electronic devices 360/370). For example, while in the communication session, if the electronic device 360 is moved closer toward the representation of the table 306′ and/or the avatar 315 (e.g., because the user of the electronic device 360 moved forward in the physical environment surrounding the electronic device 360), the viewpoint of the three-dimensional environment 350A would change accordingly, such that the representation of the table 306′, the representation of the window 309′ and the avatar 315 appear larger in the field of view. In some examples, each user may independently interact with the three-dimensional environment 350A/350B, such that changes in viewpoints of the three-dimensional environment 350A and/or interactions with virtual objects in the three-dimensional environment 350A by the first electronic device 360 optionally do not affect what is shown in the three-dimensional environment 350B at the second electronic device 370, and vice versa.
In some examples, the avatars 315/317 are a representation (e.g., a full-body rendering) of the users of the electronic devices 370/360. In some examples, the avatar 315/317 is a representation of a portion (e.g., a rendering of a head, face, head and torso, etc.) of the users of the electronic devices 370/360. In some examples, the avatars 315/317 are a user-personalized, user-selected, and/or user-created representation displayed in the three-dimensional environments 350A/350B that is representative of the users of the electronic devices 370/360. It should be understood that, while the avatars 315/317 illustrated in
As mentioned above, while the first electronic device 360 and the second electronic device 370 are in the multi-user communication session, the three-dimensional environments 350A/350B may be a shared three-dimensional environment that is presented using the electronic devices 360/370. In some examples, content that is viewed by one user at one electronic device may be shared with another user at another electronic device in the multi-user communication session. In some such examples, the content may be experienced (e.g., viewed and/or interacted with) by both users (e.g., via their respective electronic devices) in the shared three-dimensional environment, as described in more detail below.
It should be understood that, in some examples, more than two electronic devices may be communicatively linked in a multi-user communication session. For example, in a situation in which three electronic devices are communicatively linked in a multi-user communication session, a first electronic device would display two avatars, rather than just one avatar, corresponding to the users of the other two electronic devices. It should therefore be understood that the various processes and exemplary interactions described herein with reference to the first electronic device 360 and the second electronic device 370 in the multi-user communication session optionally apply to situations in which more than two electronic devices are communicatively linked in a multi-user communication session.
In some examples, it may be advantageous to selectively control the display of the avatars corresponding to the users of electronic devices that are communicatively linked in a multi-user communication session. For example, as described herein, content may be shared and presented in the three-dimensional environment such that the content is optionally viewable by and/or interactive to multiple users in the multi-user communication session. As discussed above, the three-dimensional environment optionally includes avatars corresponding to the users of the electronic devices that are in the communication session. In some instances, the presentation of the content in the three-dimensional environment with the avatars corresponding to the users of the electronic devices may cause portions of the content to be blocked or obscured from a viewpoint of one or more users in the multi-user communication session and/or may distract one or more users in the multi-user communication session. In some examples, presentation of the content in the three-dimensional environment with the avatars corresponding to the users of the electronic devices may not cause portions of the content to be blocked or obscured from a viewpoint of one or more users in the multi-user communication session and/or may not distract one or more users in the multi-user communication session. Additionally, in some examples, it may be advantageous to, when presenting content in the three-dimensional environment during a multi-user communication session, cease display of the avatars corresponding to the users of the electronic devices depending on the type of content that is being presented, as described herein in more detail.
In some examples, the three-dimensional environments 450A/450B may include one or more virtual objects (e.g., corresponding to virtual object 110 shown in
As shown in
As shown in
In some examples, rather than ceasing display of the avatars 415/417 in three-dimensional environments 450A/450B, the electronic devices 460/470 may replace display of the avatars 415/417 with alternative representations in three-dimensional environments 450A/450B. For example, the first electronic device 460 may replace display of the avatar 415 corresponding to the user of the second electronic device 470 with an alternative representation of the user, such as a bubble (e.g., similar to audio bubble 414), an abstract representation of the user (e.g., such as a cloud), a three-dimensional or two-dimensional point (e.g., circular point, rectangular point, or triangular point), and the like. Similarly, in some examples, the second electronic device 470 may replace display of the avatar 417 corresponding to the user of the first electronic device 460 with an alternative representation of the user, such as one of those described above. It should be understood that, in some such examples, the alternative representations of the users in the three-dimensional environments 450A/450B may be accompanied by audio corresponding to the voices of the users, as discussed above.
As mentioned previously herein, content can be shared in the three-dimensional environments 450A/450B while the first electronic device 460 and the second electronic device 470 are communicatively linked in the multi-user communication session. In some examples, the immersive content 452 displayed at the first electronic device 460 can be shared with the second electronic device 470 such that the immersive content 452 can also be displayed in three-dimensional environment 450B at the second electronic device 470. For example, the user of the first electronic device 460 can provide a respective input corresponding to a request to share the immersive content 452 with the second electronic device 470.
As shown in
In some examples, in response to receiving the selection input directed to the first option 411, while the first electronic device 460 and the second electronic device 470 are in the communication session, the second electronic device 470 receives an indication of a request from the first electronic device 460 to share content with the second electronic device 470. For example, as shown in
In some examples, in response to receiving the selection input 472B directed to the option 419A in the second user interface element 418, the second electronic device 470 updates display of the three-dimensional environment 450B to include the immersive content 452 (e.g., the immersive art gallery), as shown in
In some examples, in response to displaying the immersive content 452 in three-dimensional environment 450B, such that the immersive content 452 is displayed at both the first electronic device 460 and the second electronic device 470, the avatars corresponding to the users of the electronic devices 460/470 are redisplayed in the three-dimensional environment. For example, as shown in
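The sharing flow described above (a share indication from one device, acceptance via a user interface element at the other, then presentation of the first-type content with the avatars redisplayed) can be sketched as a small state machine; the event and state names are assumptions of this sketch, not a disclosed protocol:

```swift
// Illustrative state handling for the share request/accept flow.
enum ShareEvent {
    case requestReceived(contentID: String)   // indication from the sharing device
    case requestAccepted(contentID: String)   // user selected the accept/join option
}

struct SessionState {
    var pendingShareID: String? = nil
    var presentedContentID: String? = nil
    var avatarsDisplayed = true
}

func handle(_ event: ShareEvent, state: inout SessionState) {
    switch event {
    case .requestReceived(let id):
        state.pendingShareID = id           // display a user interface element offering to join
    case .requestAccepted(let id):
        state.pendingShareID = nil
        state.presentedContentID = id       // immersive (first-type) content now shown at both devices
        state.avatarsDisplayed = true       // so the avatars are redisplayed alongside it
    }
}
```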
In some examples, while displaying the immersive content 452 in the three-dimensional environments 450A/450B, the user of the first electronic device 460 may receive a notification 420 corresponding to a trigger from a respective application (e.g., a respective application associated with one of the plurality of virtual objects 410 shown in
As shown in
As shown in
In some examples, the second electronic device 470 may cease displaying the immersive content 452 in the three-dimensional environment 450B. For example, the user of the second electronic device 470 may provide one or more respective inputs (e.g., pinch, tap, touch, verbal, etc.) corresponding to a request to navigate away from (e.g., cease displaying) the immersive content 452. In some examples, the second electronic device 470 may cease displaying the immersive content 452 in response to detecting that the first electronic device 460 is no longer displaying the immersive content 452 in three-dimensional environment 450A. For example, after detecting that the first electronic device 460 is no longer displaying the immersive content 452 in three-dimensional environment 450A, the second electronic device 470 may lose access (e.g., entitlement) to the immersive content 452 that was initially shared by the first electronic device 460. In some such examples, the second electronic device 470 may cease displaying the immersive content 452 in three-dimensional environment 450B after a threshold period (e.g., 1, 1.5, 2, 4, 5, 8, or 10 s) has elapsed since the first electronic device 460 stopped displaying the immersive content 452.
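As an illustrative sketch of the timeout behavior mentioned above (the class, timer mechanism, and default threshold are assumptions), the receiving device can schedule teardown of the shared content once it detects that the sharing device stopped presenting it:

```swift
import Foundation

// Illustrative teardown after a threshold period; requires an active run loop.
final class SharedContentSession {
    private(set) var isDisplayingSharedContent = true
    private var teardownTimer: Timer?

    /// Called when this device detects that the sharing device no longer displays the content.
    func sharerStoppedPresenting(threshold: TimeInterval = 2.0) {
        teardownTimer?.invalidate()
        teardownTimer = Timer.scheduledTimer(withTimeInterval: threshold, repeats: false) { [weak self] _ in
            self?.isDisplayingSharedContent = false   // cease displaying the shared immersive content
        }
    }
}
```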
In some examples, when the immersive content 452 is no longer displayed in the three-dimensional environments 450A/450B shared between the first electronic device 460 and the second electronic device 470, the avatars corresponding to the users of the electronic devices 460/470 are redisplayed in three-dimensional environments 450A/450B. For example, the first electronic device 460 optionally redisplays the avatar 415 corresponding to the user of the second electronic device 470 in three-dimensional environment 450A, and the second electronic device 470 optionally redisplays the avatar 417 corresponding to the user of the first electronic device 460 in three-dimensional environment 450B (e.g., as similarly shown in
It should be understood that, while the immersive content 452 was described above as being an immersive art gallery, any type of immersive content can be provided. For example, the immersive content may refer to a video game, an immersive environmental rendering (e.g., a three-dimensional representation of a beach or a forest), a computer-generated model (e.g., a three-dimensional mockup of a house designed in a computer graphics application), and the like. Each of these types of immersive content optionally follow the above-described behavior for dictating the display of avatars in the shared three-dimensional environment. In some examples, the immersive content may refer to any content that may be navigated by a user with six degrees of freedom.
As described herein, various types of content can be shared between multiple devices while in the multi-user communication session. Attention is now directed to sharing an alternative type of content (e.g., a second type of content) in the three-dimensional environment shared between the first electronic device and the second electronic device. As described below, content that includes immersive content (e.g., video or a three-dimensional scene/environment) that is shared between the first electronic device and the second electronic device and displayed in the three-dimensional environment optionally causes the first electronic device and the second electronic device to cease displaying the avatars corresponding to the users in the shared three-dimensional environment.
As shown in
In some examples, virtual objects (e.g., application windows and user interfaces, representations of content, application icons, and the like) that are viewable by a user may be private while the user is participating in a multi-user communication session with one or more other users (e.g., via electronic devices that are communicatively linked in the multi-user communication session). For example, as discussed above, the user of the first electronic device 560 is optionally viewing the user interface element 524 in three-dimensional environment 550A. In some examples, a representation of the user interface element is displayed in three-dimensional environment 550B at the second electronic device 570 with the avatar 517 corresponding to the user of the first electronic device 560. In some such examples, the representation of the user interface element 524 displayed in three-dimensional environment 550B is optionally an occluded (e.g., a faded or blurred) representation of the user interface element 524 displayed in three-dimensional environment 550A. For example, the user of the second electronic device 570 is prevented from viewing the contents of the user interface element 524 displayed in three-dimensional environment 550A at the first electronic device 560.
As shown in
As discussed above with reference to
In some examples, as previously described herein, the immersive content 554 may be shared with the second electronic device 570 for displaying the immersive content 554 in three-dimensional environment 550B. For example, while the first electronic device 560 and the second electronic device 570 are in the multi-user communication session, the user of the first electronic device 560 may provide one or more inputs for sharing the immersive content 554 with the second electronic device 570 (e.g., via a “share” affordance displayed in a respective user interface element or application user interface in three-dimensional environment 550A, a verbal command, etc.). In some examples, the second electronic device 570 may detect an indication corresponding to a request from the first electronic device 560 to share the immersive content 554 with the second electronic device 570. In response to detecting the indication, the second electronic device 570 may display a respective user interface element 526 corresponding to the share request. For example, as shown in
In some examples, as shown in
In some examples, in response to displaying the immersive video 554 in three-dimensional environment 550B, such that the immersive video 554 is displayed at both the first electronic device 560 and the second electronic device 570, the avatars corresponding to the users of the electronic devices 560/570 are not redisplayed in the three-dimensional environments. For example, as shown in
Accordingly, as outlined above, when sharing immersive content that is viewable from a limited perspective (e.g., with three degrees of freedom, such as an immersive video) in a multi-user communication session, the electronic devices 560/570 optionally forgo displaying the avatars corresponding to the users of the electronic devices and maintain presentation of the audio corresponding to the voices of the users. In some examples, the audio corresponding to the voices of the users may no longer be spatialized when corresponding avatars are not displayed. Thus, one advantage of the disclosed method of displaying immersive content in a multi-user communication session is that users may continue interacting with each other verbally while an unobscured view of the immersive content is maintained in the shared three-dimensional environment.
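For illustration, the audio behavior outlined above can be sketched as a simple choice between a spatialized and a non-spatialized presentation, keyed to whether the corresponding avatar is displayed; the names below are assumptions of this sketch:

```swift
// Illustrative selection of how a remote participant's voice is presented.
struct Vector3 { var x, y, z: Double }

enum VoicePresentation {
    case spatialized(at: Vector3)   // appears to emanate from the avatar's location
    case nonSpatialized             // e.g., mono or stereo when no avatar is displayed
}

func voicePresentation(avatarDisplayed: Bool, avatarPosition: Vector3) -> VoicePresentation {
    avatarDisplayed ? .spatialized(at: avatarPosition) : .nonSpatialized
}
```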
In some examples, the users of electronic devices 560/570 may provide one or more respective inputs corresponding to a request to cease displaying the immersive content 554 in three-dimensional environments 550A/550B. For example, while the first electronic device 560 and the second electronic device 570 are in the multi-user communication session, the user of the second electronic device 570 may provide one or more inputs for ceasing display of the immersive content 554 in the three-dimensional environment 550B (e.g., via a “close” or “exit” affordance displayed in a respective user interface element in three-dimensional environment 550B, a verbal command, etc.). In some examples, in response to receiving the one or more respective inputs, the second electronic device 570 optionally ceases display of the immersive content 554 in three-dimensional environment 550B, as shown in
In some examples, after the immersive content 554 ceases to be displayed in three-dimensional environment 550B, the first electronic device 560 and the second electronic device 570 forgo redisplaying the avatars corresponding to the users of the electronic devices 560 and 570. As shown in
It should be understood that, in some examples, if the user of the first electronic device 560 were to provide one or more respective inputs (e.g., such as pinch, tap, touch, verbal, etc. described above) corresponding to a request to cease displaying the immersive video 554, in response to receiving the one or more respective inputs, the first electronic device 560 would cease displaying the immersive video 554 in three-dimensional environment 550A. Additionally, after ceasing display of the immersive video 554 at the first electronic device 560, the first electronic device 560 and the second electronic device 570 would redisplay the avatars corresponding to the users of the electronic devices 560/570 in the three-dimensional environments (e.g., as similarly shown in
As described herein, various types of content can be shared between multiple devices while in the multi-user communication session. Attention is now directed to sharing an alternative type of content (e.g., a third type of content) in the three-dimensional environment shared between the first electronic device and the second electronic device. As described below, content that includes non-immersive content (e.g., two-dimensional images, two-dimensional videos, three-dimensional objects, or the like) that is shared between the first electronic device and the second electronic device and displayed in the three-dimensional environment optionally causes the first electronic device and the second electronic device to maintain displaying the avatars corresponding to the users in the shared three-dimensional environment.
As shown in
As used herein, display of video content in a “full-screen mode” in the three-dimensional environments 650A/650B optionally refers to display of the video content at a respective size and/or with a respective visual emphasis in the three-dimensional environments 650A/650B. For example, the electronic devices 660/670 may display the video content at a size that is larger than (e.g., 1.2×, 1.4×, 1.5×, 2×, 2.5×, or 3×) the size of the third virtual object 632 containing the option 627 in three-dimensional environments 650A/650B. Additionally, for example, the video content may be displayed with a greater visual emphasis than other virtual objects and/or representations of physical objects displayed in three-dimensional environments 650A/650B. As described in more detail below, while the video content is displayed in the full-screen mode, the first, second, and third virtual objects 626, 630, and 632 may become visually deemphasized (e.g., may cease being displayed in three-dimensional environments 650A/650B), and the captured portions of the physical environment surrounding the electronic devices 660/670 may become faded and/or darkened in three-dimensional environments 650A/650B.
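As a hedged sketch of the full-screen presentation just described (the particular scale factor, dimming value, and names are illustrative assumptions), entering the full-screen mode can be modeled as scaling the content relative to its originating window while visually deemphasizing everything else:

```swift
// Illustrative parameters for presenting video content in a full-screen mode.
struct FullScreenPresentation {
    var contentScale: Double          // size relative to the originating virtual object (e.g., 1.2x-3x)
    var environmentDimming: Double    // 0 = no dimming, 1 = fully darkened passthrough
    var otherObjectsHidden: Bool      // other virtual objects cease being displayed
}

func enterFullScreen(scaleFactor: Double = 2.0) -> FullScreenPresentation {
    FullScreenPresentation(contentScale: scaleFactor,
                           environmentDimming: 0.6,
                           otherObjectsHidden: true)
}
```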
As described previously with reference to
As shown in
As shown in
As discussed previously herein, in some examples, while the first electronic device 660 and the second electronic device 670 are communicatively linked in the multi-user communication session, when one electronic device displays certain types of content in the three-dimensional environment that has not been shared with the other electronic device, the avatars corresponding to the users of the electronic devices 660/670 cease to be displayed. For example, as shown in
In some examples, as previously described herein, the video content 656 may be shared with the second electronic device 670 for displaying the video content 656 in three-dimensional environment 650B. For example, while the first electronic device 660 and the second electronic device 670 are in the multi-user communication session, the user of the first electronic device 660 may provide one or more inputs for sharing the video content 656 with the second electronic device 670 (e.g., via a “share” affordance displayed in a respective user interface element or application user interface in three-dimensional environment 650A, a verbal command, etc.). In some examples, the second electronic device 670 may detect an indication corresponding to a request from the first electronic device 660 to share the content 656 with the second electronic device 670. In response to detecting the indication, the second electronic device 670 may display a respective user interface element 634 corresponding to the share request. For example, as shown in
In some examples, in response to detecting the selection input 672B, the second electronic device 670 optionally presents the video content 656 in the three-dimensional environment 650B, as shown in
As shown in
As shown in
In some examples, the first electronic device 660 and the second electronic device 670 may reorient and/or reposition the avatars corresponding to the users of the electronic devices 660/670 when the two-dimensional content 656 is displayed in the shared three-dimensional environment. For example, as shown in
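For illustration, the repositioning just described can be sketched as laying the avatars out in a row beside the viewer, each oriented toward the shared content; the spacing, coordinate conventions, and names are assumptions of this sketch:

```swift
import Foundation   // for atan2

// Illustrative side-by-side avatar layout facing the shared two-dimensional content.
struct Vector3 { var x, y, z: Double }
struct AvatarPlacement { var position: Vector3; var yawRadians: Double }

/// Lays out `count` avatars in a row at the viewer's depth, centered on the viewer,
/// with each avatar oriented toward the content so the group appears to view it together.
func avatarRow(contentPosition: Vector3,
               viewerPosition: Vector3,
               count: Int,
               spacing: Double = 0.8) -> [AvatarPlacement] {
    (0..<count).map { index in
        let offset = (Double(index) - Double(count - 1) / 2.0) * spacing
        let position = Vector3(x: viewerPosition.x + offset,
                               y: viewerPosition.y,
                               z: viewerPosition.z)
        let yaw = atan2(contentPosition.x - position.x,
                        contentPosition.z - position.z)   // face toward the content
        return AvatarPlacement(position: position, yawRadians: yaw)
    }
}
```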
As similarly described above, in some examples, the users of electronic devices 660/670 may provide one or more respective inputs corresponding to a request to cease displaying the video content 656 in three-dimensional environments 650A/650B. For example, while the first electronic device 660 and the second electronic device 670 are in the multi-user communication session, the user of the first electronic device 660 (or the second electronic device 670) may provide one or more inputs for ceasing display of the video content 656 in the three-dimensional environment 650A (or 650B) (e.g., via a “close” or “exit” affordance displayed in a respective user interface element in three-dimensional environment 650A (or 650B), a verbal command, etc.). In some such examples, in response to receiving the one or more respective inputs, the content 656 may cease being displayed at the first electronic device 660 (or the second electronic device 670). For example, the first electronic device 660 (or second electronic device 670) optionally ceases visually deemphasizing the captured portions of the physical environment surrounding the electronic device.
As similarly described above, in some examples, if the video content 656 were to cease being displayed in three-dimensional environment 650A (or 650B), the first electronic device 660 and the second electronic device 670 would cease displaying the avatars corresponding to the users of the electronic devices 660/670. For example, because the video content 656 would still be displayed at one of the two electronic devices, the avatar 615 corresponding to the user of the second electronic device 670 would cease being displayed in three-dimensional environment 650A and the avatar 617 corresponding to the user of the first electronic device 660 would cease being displayed in three-dimensional environment 650B. It should be understood that, in some such examples, though the avatars corresponding to the users of the electronic devices 660/670 would not be displayed in the three-dimensional environments, the presentation of the audio corresponding to the voices of the users of the electronic devices would optionally be maintained. However, in some examples, that audio may not be spatialized and may instead be presented in mono or stereo. In some examples, once the other electronic device ceases displaying the video content (e.g., due to user input), the first electronic device 660 and the second electronic device 670 would redisplay the avatars corresponding to the users of the electronic devices 660/670 in the three-dimensional environments (e.g., as similarly shown in
As described herein, various types of content can be shared between multiple devices while in the multi-user communication session. Attention is now directed to sharing an alternative type of content (e.g., a fourth type of content) in the three-dimensional environment shared between the first electronic device and the second electronic device. As described below, non-immersive content displayed in a two-dimensional object or three-dimensional object that is shared between the first electronic device and the second electronic device and displayed in the three-dimensional environment optionally causes the first electronic device and the second electronic device to maintain displaying the avatars corresponding to the users in the shared three-dimensional environment.
As shown in
As described previously with reference to
As shown in
As shown in
As discussed above with reference to
In some examples, the third virtual object 732 (e.g., the video playback application window) may be shared with the second electronic device 770 for displaying the video content 758 within the third virtual object 732 in three-dimensional environment 750B. For example, while the first electronic device 760 and the second electronic device 770 are in the multi-user communication session, the user of the first electronic device 760 may provide one or more inputs for sharing the third virtual object 732 with the second electronic device 770 (e.g., via a “share” affordance displayed in a respective user interface element or application user interface in three-dimensional environment 750A, a verbal command, etc.). In some examples, the second electronic device 770 may detect an indication corresponding to a request from the first electronic device 760 to share the virtual object (e.g., and the content 758) with the second electronic device 770. In response to detecting the indication, the second electronic device 770 may display a respective user interface element 734 corresponding to the share request. For example, as shown in
In some examples, in response to detecting the selection input 772B, the second electronic device 770 optionally presents the video content 758 within the third virtual object 732 in the three-dimensional environment 750B, as shown in
As shown in
As shown in
In some examples, the first electronic device 760 and the second electronic device 770 may reorient and/or reposition the avatars corresponding to the users of the electronic devices 760/770 when the two-dimensional video content 758 is displayed in the shared three-dimensional environment. For example, as shown in
As similarly described above, in some examples, the users of electronic devices 760/770 may provide one or more respective inputs corresponding to a request to cease displaying the video content 758 within the third virtual object 732 in three-dimensional environments 750A/750B. For example, while the first electronic device 760 and the second electronic device 770 are in the multi-user communication session, the user of the first electronic device 760 (or the second electronic device 770) may provide one or more inputs for ceasing display of the video content 758 within the virtual object 732 in the three-dimensional environment 750A (or 750B) (e.g., via a “close” or “exit” affordance displayed in a respective user interface element in three-dimensional environment 750A (or 750B), a verbal command, etc.). In some such examples, in response to receiving the one or more respective inputs, the content 758 may cease being displayed at the first electronic device 760 (or the second electronic device 770). For example, the first electronic device 760 (or second electronic device 770) optionally ceases visually deemphasizing the captured portions of the physical environment surrounding the electronic device.
In some examples, if the video content 758 were to cease being displayed in three-dimensional environment 750A (or 750B), the first electronic device 760 and the second electronic device 770 would maintain display of the avatars corresponding to the users of the electronic devices 760/770. For example, although the video content 758 would no longer be displayed at one of the two electronic devices, the avatar 715 corresponding to the user of the second electronic device 770 and the avatar 717 corresponding to the user of the first electronic device 760 would not obscure or distract from the other user's experience of the content 758, so the avatars 715 and 717 would optionally remain displayed in three-dimensional environments 750A and 750B, respectively. It should be understood that, once the other electronic device ceases displaying the video content (e.g., due to user input), the first electronic device 760 and the second electronic device 770 would maintain display of the avatars corresponding to the users of the electronic devices 760/770 in the three-dimensional environment (e.g., as similarly shown in
It is understood that the examples shown and described herein are merely exemplary and that additional and/or alternative elements may be provided within the three-dimensional environment for interacting with the illustrative content. It should be understood that the appearance, shape, form and size of each of the various user interface elements and objects shown and described herein are exemplary and that alternative appearances, shapes, forms and/or sizes may be provided. For example, the virtual objects representative of application windows (e.g., virtual objects 626, 630 and 632) may be provided in an alternative shape than a rectangular shape, such as a circular shape, triangular shape, etc. In some examples, the various selectable options (e.g., the options 411 and 413, the option 523A, or the option 627), user interface elements (e.g., user interface element 526 or user interface element 634), control elements (e.g., playback controls 625 or 725), etc. described herein may be selected verbally via user verbal commands (e.g., “select option” verbal command). Additionally or alternatively, in some examples, the various options, user interface elements, control elements, etc. described herein may be selected and/or manipulated via user input received via one or more separate input devices in communication with the electronic device(s). For example, selection input may be received via physical input devices, such as a mouse, trackpad, keyboard, etc. in communication with the electronic device(s).
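For illustration only, the following Swift sketch shows one possible way selection inputs from these different sources could be funneled into a single handler; the names (`SelectionSource`, `handleSelection`) are hypothetical.

```swift
// Hypothetical sketch: selections of the options, user interface elements, and
// control elements described above may originate from a verbal command or from a
// separate physical input device in communication with the electronic device.
enum SelectionSource {
    case verbalCommand(String)        // e.g., a "select option" verbal command
    case physicalInputDevice(String)  // e.g., "mouse", "trackpad", or "keyboard"
}

func handleSelection(ofElement elementIdentifier: String, from source: SelectionSource) {
    switch source {
    case .verbalCommand(let phrase):
        print("Element \(elementIdentifier) selected via verbal command: \(phrase)")
    case .physicalInputDevice(let deviceName):
        print("Element \(elementIdentifier) selected via \(deviceName)")
    }
}

// Example: handleSelection(ofElement: "option-627", from: .physicalInputDevice("trackpad"))
```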
Additionally, it should be understood that, although the above methods are described with reference to two electronic devices, the above methods optionally apply for two or more electronic devices communicatively linked in a communication session. For example, while three, four, five, or more electronic devices are in a communication session represented by a three-dimensional environment, and content of a first type (e.g., an immersive scene or experience that provides the user with six degrees of freedom, such as an immersive art gallery/exhibit, video game, or three-dimensional model) is shared, a respective user viewing the content may see the avatars corresponding to (and hear spatial audio of) the users of other electronic devices within a three-dimensional environment corresponding to the content of the first type who are also viewing the content (e.g., with spatial truth as similarly described with reference to
If the content is the second type of content (e.g., an immersive video or scene/environment that provides the user with three degrees of freedom, such as an immersive movie, TV episode, sports game, musical recording), a respective user viewing the content may see the avatars corresponding to (and hear spatial audio of) the users of electronic devices within a three-dimensional environment corresponding to the content of the second type who are also viewing the content from different perspectives (e.g., as similarly described with reference to
If the content is the third type of content (e.g., a non-immersive (two-dimensional) video/image/web page that is displayed in a full-screen mode, such as a two-dimensional representation of a movie, TV episode, sports game, musical recording, or user interface), a respective user viewing the content may see the avatars corresponding to (and hear spatial audio of) the users of electronic devices in the three-dimensional environment representing the communication session who are also viewing the content in the full-screen mode (e.g., as similarly described with reference to
As shown in
In some examples, at 806, while presenting the first computer-generated environment including the avatar corresponding to the user of the second electronic device, the first electronic device may receive, via the one or more input devices (e.g., such as hand-tracking sensors 202 in
In some examples, at 808, in response to receiving the first indication, at 810, in accordance with a determination that the request is accepted (e.g., because a selection input (e.g., selection input 472B in
As shown in
It is understood that process 800 is an example and that more, fewer, or different operations can be performed in the same or in a different order. Additionally, the operations in process 800 described above are, optionally, implemented by running one or more functional modules in an information processing apparatus such as general-purpose processors (e.g., as described with respect to
Therefore, according to the above, some examples of the disclosure are directed to a method. In some examples, the method comprises, at a first electronic device in communication with a display, one or more input devices, and a second electronic device: while in a communication session with the second electronic device, presenting, via the display, a first computer-generated environment including an avatar corresponding to a user of the second electronic device; while displaying the first computer-generated environment including the avatar corresponding to the user of the second electronic device, receiving, via the one or more input devices, a first indication corresponding to a request from the second electronic device to share content with the first electronic device; and in response to receiving the first indication, in accordance with a determination that the request is accepted, in accordance with a determination that the content shared with the first electronic device is a first type of content, replacing display of the first computer-generated environment with a second computer-generated environment corresponding to the content, and displaying the avatar corresponding to the user of the second electronic device in the second computer-generated environment, and in accordance with a determination that the content shared with the first electronic device is a second type of content, different from the first type of content, updating display of the first computer-generated environment to include a first object corresponding to the content, and ceasing display of the avatar corresponding to the user of the second electronic device in the first computer-generated environment.
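For illustration only, the branching of the above method can be sketched as a single decision keyed on the type of the shared content. The Swift sketch below uses hypothetical names (`ContentType`, `SessionState`, `handleAcceptedShare`) and also folds in the third and fourth types described later; it is not a definitive implementation of the method.

```swift
// Hypothetical sketch of the accept-time branching:
// first type  -> replace the environment with one for the content, keep the avatar;
// second type -> add an object for the content, cease displaying the avatar.
enum ContentType { case first, second, third, fourth }

struct SessionState {
    var environment = "firstComputerGeneratedEnvironment"
    var contentObjects: [String] = []
    var remoteAvatarVisible = true
}

func handleAcceptedShare(of type: ContentType, state: inout SessionState) {
    switch type {
    case .first:
        // First type: replace display of the first environment with a second
        // environment corresponding to the content; the avatar remains displayed.
        state.environment = "secondComputerGeneratedEnvironment"
        state.remoteAvatarVisible = true
    case .second:
        // Second type: update the first environment to include a first object
        // corresponding to the content and cease display of the avatar.
        state.contentObjects.append("firstObject")
        state.remoteAvatarVisible = false
    case .third, .fourth:
        // Third and fourth types: add a second or third object, respectively,
        // and maintain display of the avatar (described further below).
        state.contentObjects.append(type == .third ? "secondObject" : "thirdObject")
        state.remoteAvatarVisible = true
    }
}
```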
Additionally or alternatively, in some examples, displaying the avatar corresponding to the user of the second electronic device includes presenting audio corresponding to a voice of the user of the second electronic device. In some examples, the method further comprises, in accordance with the determination that the content shared with the first electronic device is the second type of content, different from the first type of content, maintaining presentation of the audio corresponding to the voice of the user of the second electronic device in the first computer-generated environment after ceasing display of the avatar corresponding to the user of the second electronic device.
Additionally or alternatively, in some examples, presenting the audio corresponding to the voice of the user of the second electronic device includes presenting spatial audio corresponding to the voice of the user of the second electronic device in the first computer-generated environment. In some examples, in accordance with the determination that the content shared with the first electronic device is the second type of content, the audio corresponding to the voice of the user of the second electronic device presented in the first computer-generated environment is non-spatial audio.
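For illustration only, the audio behavior described above can be sketched as pairing spatialization with avatar visibility; the Swift types below (`RemoteVoicePresentation`, `updateVoicePresentation`) are hypothetical.

```swift
// Hypothetical sketch: the voice of the other user remains presented even when
// the avatar is not displayed, but it is spatialized only while the avatar is shown.
struct RemoteVoicePresentation {
    var isPresented = true
    var isSpatial = true
    var spatialAnchor: SIMD3<Float>? = SIMD3<Float>(0, 0, -1)   // e.g., the avatar's location
}

func updateVoicePresentation(avatarDisplayed: Bool, voice: inout RemoteVoicePresentation) {
    voice.isPresented = true            // presentation of the voice is maintained
    voice.isSpatial = avatarDisplayed   // spatial audio only while the avatar is displayed
    if !avatarDisplayed {
        voice.spatialAnchor = nil       // fall back to non-spatial (e.g., mono or stereo) audio
    }
}
```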
Additionally or alternatively, in some examples, the first electronic device and the second electronic device are each a head-mounted display.
Additionally or alternatively, in some examples, the method further comprises, in response to receiving the first indication, displaying, via the display, a user interface element corresponding to the content in the first computer-generated environment.
Additionally or alternatively, in some examples, the user interface element includes one or more options that are selectable to accept the request from the second electronic device to share the content with the first electronic device.
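For illustration only, the share-request flow (indication, user interface element, acceptance) could be sketched as follows; the names (`ShareIndication`, `ShareRequestHandler`) are hypothetical.

```swift
// Hypothetical sketch: an indication of a share request arrives from the other
// device, a user interface element with an "accept" option is displayed, and
// accepting triggers presentation of the shared content.
struct ShareIndication {
    let senderDeviceIdentifier: String
    let contentIdentifier: String
}

enum ShareResponseOption { case accept, decline }

final class ShareRequestHandler {
    private(set) var displayedRequest: ShareIndication?
    var onAccepted: ((ShareIndication) -> Void)?

    // Detecting the indication corresponding to the share request causes a
    // respective user interface element to be displayed.
    func didDetect(_ indication: ShareIndication) {
        displayedRequest = indication
    }

    // Selecting an option in the user interface element dismisses it and, if the
    // request is accepted, triggers presentation of the shared content.
    func didSelect(_ option: ShareResponseOption) {
        guard let request = displayedRequest else { return }
        displayedRequest = nil
        if option == .accept {
            onAccepted?(request)
        }
    }
}
```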
Additionally or alternatively, in some examples, the first type of content is content that includes a three-dimensional immersive environment.
Additionally or alternatively, in some examples, the second computer-generated environment corresponding to the content is a representation of the three-dimensional immersive environment. In some examples, the method further comprises: while displaying the second computer-generated environment, detecting, via the one or more input devices, movement of the first electronic device in a physical environment surrounding the first electronic device from a first location to a second location; and in response to detecting the movement of the first electronic device, changing a location of a viewpoint of the user of the first electronic device in the second computer-generated environment from a first respective location to a second respective location, wherein the second respective location in the second computer-generated environment is based on the second location in the physical environment, and maintaining display of the avatar corresponding to the user of the second electronic device in the second computer-generated environment.
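For illustration only, a minimal Swift sketch of the six-degrees-of-freedom behavior described above, assuming hypothetical types (`ImmersiveSceneViewpoint`, `deviceMoved`):

```swift
// Hypothetical sketch: for the first type of content, physical movement of the
// device changes the location of the user's viewpoint in the second
// computer-generated environment, and the avatar remains displayed.
struct ImmersiveSceneViewpoint {
    var position: SIMD3<Float>       // location of the viewpoint in the content environment
    var remoteAvatarVisible = true   // the avatar remains displayed for this content type
}

func deviceMoved(from oldLocation: SIMD3<Float>,
                 to newLocation: SIMD3<Float>,
                 viewpoint: inout ImmersiveSceneViewpoint) {
    // The second respective location of the viewpoint is based on the second
    // physical location: here, the physical displacement is applied directly.
    viewpoint.position += newLocation - oldLocation
    viewpoint.remoteAvatarVisible = true   // display of the avatar is maintained
}
```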
Additionally or alternatively, in some examples, the second type of content is content that includes a viewpoint-limited three-dimensional immersive video, scene, or environment.
Additionally or alternatively, in some examples, the first object corresponding to the content is a representation of the viewpoint-limited three-dimensional immersive video, scene, or environment. In some examples, the method further comprises: while displaying the first computer-generated environment including the first object, detecting, via the one or more input devices, movement of a respective portion of the user of the first electronic device from a first pose to a second pose; and in response to detecting the movement of the respective portion of the user, changing a viewpoint of the first object in the first computer-generated environment from a first viewpoint to a second viewpoint, different from the first viewpoint, wherein the second viewpoint is based on the second pose of the respective portion of the user, and restricting changing a location of the viewpoint of the user of the first electronic device in the first computer-generated environment.
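For illustration only, a minimal Swift sketch of the three-degrees-of-freedom behavior described above, assuming hypothetical types (`HeadPose`, `ViewpointLimitedPlayback`):

```swift
// Hypothetical sketch: for the second type of content, a change in the pose of a
// respective portion of the user (e.g., the head) rotates the viewpoint into the
// content, while changing the location of the viewpoint is restricted.
struct HeadPose {
    var yaw: Float     // radians
    var pitch: Float   // radians
    var roll: Float    // radians
}

struct ViewpointLimitedPlayback {
    var orientation: HeadPose    // follows the pose of the user's head
    let position: SIMD3<Float>   // location is fixed; translation is restricted
}

func headPoseChanged(to newPose: HeadPose, playback: inout ViewpointLimitedPlayback) {
    // The second viewpoint is based on the second pose, so only the orientation
    // is updated.
    playback.orientation = newPose
    // Changing the location of the viewpoint is restricted: `position` is a
    // constant and is intentionally not modified here.
}
```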
Additionally or alternatively, in some examples, the method further comprises, in response to receiving the first indication, in accordance with a determination that the request is accepted, in accordance with a determination that the content shared with the first electronic device is a third type of content, different from the first type of content and the second type of content, updating display of the first computer-generated environment to include a second object, different from the first object, corresponding to the content, and maintaining display of the avatar corresponding to the user of the second electronic device in the first computer-generated environment.
Additionally or alternatively, in some examples, the third type of content is content that includes two-dimensional content configured to be displayed in the second object in the first computer-generated environment.
Additionally or alternatively, in some examples, before receiving the first indication corresponding to the request from the second electronic device to share content with the first electronic device, the first computer-generated environment includes a respective object. In some examples, after receiving the first indication and after the request has been accepted, in accordance with the determination that the content shared with the first electronic device is the third type of content, the respective object is no longer displayed in the first computer-generated environment.
Additionally or alternatively, in some examples, the method further comprises, in response to receiving the first indication, in accordance with a determination that the request is accepted, in accordance with a determination that the content shared with the first electronic device is a fourth type of content, different from the first type of content, the second type of content, and the third type of content: updating display of the first computer-generated environment to include a third object, different from the first object and the second object, corresponding to the content; and maintaining display of the avatar corresponding to the user of the second electronic device in the first computer-generated environment.
Additionally or alternatively, in some examples, the fourth type of content is an application object associated with an application running on the second electronic device, the application object is configured to display second content, and the third object corresponds to the application object in the first computer-generated environment.
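For illustration only, a minimal Swift sketch of presenting a corresponding object for a shared application object while maintaining the avatar, assuming hypothetical types (`RemoteApplicationObject`, `LocalSharedEnvironment`):

```swift
// Hypothetical sketch: for the fourth type of content, the shared item is an
// application object running on the other device; the local device displays a
// corresponding third object and keeps the avatar visible.
struct RemoteApplicationObject {
    let applicationIdentifier: String   // application running on the other device
    let contentIdentifier: String       // the second content displayed in the object
}

struct LocalSharedEnvironment {
    var windowObjects: [String] = []
    var remoteAvatarVisible = true
}

func presentSharedApplicationObject(_ remote: RemoteApplicationObject,
                                    in environment: inout LocalSharedEnvironment) {
    // Add a third object that corresponds to the remote application object.
    environment.windowObjects.append("window-for-\(remote.applicationIdentifier)")
    // Display of the avatar corresponding to the other user is maintained.
    environment.remoteAvatarVisible = true
}
```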
Additionally or alternatively, in some examples, the method further comprises: after replacing display of the first computer-generated environment with the second computer-generated environment corresponding to the content and displaying the avatar corresponding to the user of the second electronic device in the second computer-generated environment in accordance with the determination that the content shared with the first electronic device is the first type of content, receiving, via the one or more input devices, a second input corresponding to a request to navigate away from the second computer-generated environment corresponding to the content; and in response to receiving the second input, replacing display of the second computer-generated environment with the first computer-generated environment, and forgoing display of the avatar corresponding to the user of the second electronic device in the first computer-generated environment.
Additionally or alternatively, in some examples, the method further comprises: while displaying the first computer-generated environment that does not include the avatar corresponding to the user of the second electronic device, receiving a second indication that the second electronic device is no longer displaying the second computer-generated environment; and in response to detecting the second indication, redisplaying the avatar corresponding to the user of the second electronic device in the first computer-generated environment.
Additionally or alternatively, in some examples, the method further comprises: after updating display of the first computer-generated environment to include the first object corresponding to the content and ceasing display of the avatar corresponding to the user of the second electronic device in the first computer-generated environment in accordance with the determination that the content shared with the first electronic device is the second type of content, receiving, via the one or more input devices, a second input corresponding to a request to cease display of the first object corresponding to the content; and in response to receiving the second input, ceasing display of the first object corresponding to the content in the first computer-generated environment, and forgoing redisplay of the avatar corresponding to the user of the second electronic device in the first computer-generated environment.
Additionally or alternatively, in some examples, the method further comprises: while displaying the first computer-generated environment that does not include the avatar corresponding to the user of the second electronic device, receiving a second indication that the second electronic device is no longer displaying the first object in the first computer-generated environment; and in response to detecting the second indication, redisplaying the avatar corresponding to the user of the second electronic device in the first computer-generated environment.
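For illustration only, the forgo-then-redisplay behavior of the two preceding examples could be sketched as follows; `AvatarRedisplayController` and its members are hypothetical.

```swift
// Hypothetical sketch: redisplay of the avatar is forgone while the other device
// still displays the content, and the avatar is redisplayed once an indication
// arrives that the other device has ceased displaying it.
final class AvatarRedisplayController {
    private(set) var avatarDisplayed: Bool
    private(set) var remoteDisplaysContent: Bool

    init(avatarDisplayed: Bool, remoteDisplaysContent: Bool) {
        self.avatarDisplayed = avatarDisplayed
        self.remoteDisplaysContent = remoteDisplaysContent
    }

    // Second input: the local user ceases display of the first object.
    func localCeasedDisplayingContent() {
        if !remoteDisplaysContent {
            avatarDisplayed = true   // otherwise, forgo redisplay of the avatar
        }
    }

    // Second indication: the other device is no longer displaying the content.
    func remoteCeasedDisplayingContent() {
        remoteDisplaysContent = false
        avatarDisplayed = true       // redisplay the avatar
    }
}
```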
Additionally or alternatively, in some examples, when the first indication corresponding to the request from the second electronic device to share content with the first electronic device is received, the content is displayed at the second electronic device.
Additionally or alternatively, in some examples, the method further comprises: while displaying the first computer-generated environment including the avatar corresponding to the user of the second electronic device and before receiving the first indication corresponding to the request from the second electronic device to share content with the first electronic device, receiving, via the one or more input devices, a respective indication that the second electronic device is presenting content; and in response to receiving the respective indication, in accordance with a determination that the content presented at the second electronic device is the first type of content, ceasing display of the avatar corresponding to the user of the second electronic device in the first computer-generated environment, and in accordance with a determination that the content presented at the second electronic device is the second type of content, ceasing display of the avatar corresponding to the user of the second electronic device in the first computer-generated environment.
Additionally or alternatively, in some examples, the method further comprises, in response to receiving the respective indication: in accordance with a determination that the content presented at the second electronic device is a third type of content, different from the first type and the second type of content, ceasing display of the avatar corresponding to the user of the second electronic device in the first computer-generated environment; and in accordance with a determination that the content presented at the second electronic device is a fourth type of content, different from the first type, the second type, and the third type of content, maintaining display of the avatar corresponding to the user of the second electronic device in the first computer-generated environment.
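For illustration only, the avatar behavior of the two preceding examples can be summarized as a mapping from content type to avatar visibility; the Swift sketch below uses hypothetical names.

```swift
// Hypothetical sketch: whether the avatar corresponding to the user of the other
// electronic device remains displayed while that device presents (unshared) content.
enum PresentedContentType { case first, second, third, fourth }

func avatarRemainsDisplayed(whileRemotePresents type: PresentedContentType) -> Bool {
    switch type {
    case .first, .second, .third:
        return false   // cease display of the avatar
    case .fourth:
        return true    // maintain display of the avatar
    }
}
```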
Some examples of the disclosure are directed to an electronic device, comprising one or more processors, memory, and one or more programs stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for performing any of the above methods.
Some examples of the disclosure are directed to a non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions, which when executed by one or more processors of an electronic device, cause the electronic device to perform any of the above methods.
Some examples of the disclosure are directed to an electronic device, comprising one or more processors, memory, and means for performing any of the above methods.
Some examples of the disclosure are directed to an information processing apparatus for use in an electronic device, the information processing apparatus comprising means for performing any of the above methods.
The foregoing description, for purpose of explanation, has been described with reference to specific examples. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The examples were chosen and described in order to best explain the principles of the invention and its practical applications, to thereby enable others skilled in the art to best use the invention and various described examples with various modifications as are suited to the particular use contemplated.
This application claims the benefit of U.S. Provisional Application No. 63/268,679, filed Feb. 28, 2022, the content of which is incorporated herein by reference in its entirety for all purposes.