This relates generally to computer systems with a display generation component and one or more input devices that present graphical user interfaces, including but not limited to electronic devices that present three-dimensional environments, via the display generation component, that include virtual objects.
The development of computer systems for augmented reality has increased significantly in recent years. Example augmented reality environments include at least some virtual elements that replace or augment the physical world. Input devices, such as cameras, controllers, joysticks, touch-sensitive surfaces, and touch-screen displays for computer systems and other electronic computing devices, are used to interact with virtual/augmented reality environments. Example virtual elements include virtual objects, such as digital images, video, text, icons, and control elements such as buttons and other graphics.
But methods and interfaces for interacting with environments that include at least some virtual elements (e.g., applications, augmented reality environments, mixed reality environments, and virtual reality environments) are cumbersome, inefficient, and limited. For example, systems that provide insufficient feedback for performing actions associated with virtual objects, systems that require a series of inputs to achieve a desired outcome in an augmented reality environment, and systems in which manipulation of virtual objects is complex, tedious, and error-prone, create a significant cognitive burden on a user, and detract from the experience with the virtual/augmented reality environment. In addition, these methods take longer than necessary, thereby wasting energy. This latter consideration is particularly important in battery-operated devices.
Accordingly, there is a need for computer systems with improved methods and interfaces for providing computer generated experiences to users that make interaction with the computer systems more efficient and intuitive for a user. Such methods and interfaces optionally complement or replace conventional methods for providing computer generated reality experiences to users. Such methods and interfaces reduce the number, extent, and/or nature of the inputs from a user by helping the user to understand the connection between provided inputs and device responses to the inputs, thereby creating a more efficient human-machine interface.
The above deficiencies and other problems associated with user interfaces for computer systems with a display generation component and one or more input devices are reduced or eliminated by the disclosed systems. In some embodiments, the computer system is a desktop computer with an associated display. In some embodiments, the computer system is a portable device (e.g., a notebook computer, tablet computer, or handheld device). In some embodiments, the computer system is a personal electronic device (e.g., a wearable electronic device, such as a watch, or a head-mounted device). In some embodiments, the computer system has a touchpad. In some embodiments, the computer system has one or more cameras. In some embodiments, the computer system has a touch-sensitive display (also known as a “touch screen” or “touch-screen display”). In some embodiments, the computer system has one or more eye-tracking components. In some embodiments, the computer system has one or more hand-tracking components. In some embodiments, the computer system has one or more output devices in addition to the display generation component, the output devices including one or more tactile output generators and one or more audio output devices. In some embodiments, the computer system has a graphical user interface (GUI), one or more processors, memory and one or more modules, programs or sets of instructions stored in the memory for performing multiple functions. In some embodiments, the user interacts with the GUI through stylus and/or finger contacts and gestures on the touch-sensitive surface, movement of the user's eyes and hand in space relative to the GUI or the user's body as captured by cameras and other movement sensors, and voice inputs as captured by one or more audio input devices. In some embodiments, the functions performed through the interactions optionally include image editing, drawing, presenting, word processing, spreadsheet making, game playing, telephoning, video conferencing, e-mailing, instant messaging, workout support, digital photographing, digital videoing, web browsing, digital music playing, note taking, and/or digital video playing. Executable instructions for performing these functions are, optionally, included in a non-transitory computer readable storage medium or other computer program product configured for execution by one or more processors.
There is a need for electronic devices with improved methods and interfaces for adjusting and/or controlling immersion associated with user interfaces. Such methods and interfaces may complement or replace conventional methods for displaying user interfaces in a three-dimensional environment. Such methods and interfaces reduce the number, extent, and/or the nature of the inputs from a user and produce a more efficient human-machine interface.
In some embodiments, an electronic device emphasizes and/or deemphasizes user interfaces based on the gaze of a user. In some embodiments, an electronic device defines levels of immersion for different user interfaces independently of one another. In some embodiments, an electronic device resumes display of a user interface at a previously-displayed level of immersion after (e.g., temporarily) reducing the level of immersion associated with the user interface. In some embodiments, an electronic device allows objects, people, and/or portions of an environment to be visible through a user interface displayed by the electronic device. In some embodiments, an electronic device reduces the level of immersion associated with a user interface based on characteristics of the electronic device and/or physical environment of the electronic device.
Note that the various embodiments described above can be combined with any other embodiments described herein. The features and advantages described in the specification are not all inclusive and, in particular, many additional features and advantages will be apparent to one of ordinary skill in the art in view of the drawings, specification, and claims. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter.
For a better understanding of the various described embodiments, reference should be made to the Description of Embodiments below, in conjunction with the following drawings in which like reference numerals refer to corresponding parts throughout the figures.
The present disclosure relates to user interfaces for providing a computer generated reality (CGR) experience to a user, in accordance with some embodiments.
The systems, methods, and GUIs described herein provide improved ways for an electronic device to adjust and/or control the level of immersion associated with user interfaces.
In some embodiments, a computer system deemphasizes a second user interface with respect to a first user interface when the system detects that the gaze of a user is directed to the first user interface. In some embodiments, the system performs such deemphasizing when the first user interface is a user interface of a particular type of application (e.g., a media player application). In some embodiments, the second user interface includes representations of one or more of virtual elements displayed by the system or portions of a physical environment of the system. Deemphasizing the second user interface allows the user to focus on the first user interface with less distraction from content outside of the first user interface.
In some embodiments, a computer system defines levels of immersion for different user interfaces independently of one another. Changes in the level of immersion with which the system displays a first user interface (e.g., of an operating system, of a first application) optionally do not affect the level of immersion with which the system displays a second user interface (e.g., of a second application). In some embodiments, immersion is controlled via manipulation of a mechanical input element (e.g., rotatable input element) associated with the computer system, where the direction and/or magnitude of the input at the mechanical input element defines the magnitude and/or direction of the change in the level of immersion. The level of immersion optionally defines the degree to which content other than the user interface in question (e.g., representations of the physical environment of the system, virtual elements outside of the user interface, etc.) is visible via the display. Providing for independently controlled levels of immersion, and/or doing so in accordance with a magnitude and/or direction of input, provides the user with consistent and expected display behavior for various user interfaces, and reduces errors of interaction with such user interfaces as a result.
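As a rough illustration of this per-interface bookkeeping, the following Swift sketch models each user interface's immersion level independently and applies a rotation of a mechanical input element only to the interface currently in focus. The type and property names, the 0.0-1.0 scale, and the default level are illustrative assumptions rather than details of this disclosure.

```swift
import Foundation

/// A hypothetical immersion model: each user interface keeps its own level,
/// expressed as a fraction from 0.0 (no immersion) to 1.0 (fully immersive).
struct ImmersionController {
    private var levels: [String: Double] = [:]   // keyed by a user-interface identifier

    /// Returns the current level for a user interface, defaulting to a baseline.
    func level(for interfaceID: String) -> Double {
        levels[interfaceID] ?? 0.5
    }

    /// Applies a rotation of the mechanical input element to the focused
    /// interface only; the sign of `delta` gives the direction of the change
    /// and its magnitude gives the size of the change. Other interfaces keep
    /// their independently stored levels.
    mutating func applyDialRotation(_ delta: Double, toFocused interfaceID: String) {
        let updated = level(for: interfaceID) + delta
        levels[interfaceID] = min(max(updated, 0.0), 1.0)
    }
}

var controller = ImmersionController()
controller.applyDialRotation(0.3, toFocused: "com.example.mediaPlayer")
controller.applyDialRotation(-0.1, toFocused: "com.example.browser")
print(controller.level(for: "com.example.mediaPlayer"))  // 0.8
print(controller.level(for: "com.example.browser"))      // 0.4
```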
In some embodiments, a computer system resumes display of a user interface at a previously-displayed level of immersion after (e.g., temporarily) reducing the level of immersion associated with the user interface. The computer system optionally detects an event for reducing the level of immersion at which a respective user interface is displayed, and reduces the level of immersion in response to the event. Subsequently, in response to detecting an event corresponding to a request to redisplay the respective user interface at the previously-displayed level of immersion, the system optionally resumes display of the respective user interface at the previously-displayed level of immersion. In some embodiments, the event to reduce the level of immersion includes detecting a press input on a mechanical input element used to control immersion, and the event to resume the previous level of immersion includes detecting release of the mechanical input element used to control immersion. Resuming display of a user interface at its previous level of immersion provides a quick and efficient manner of returning to a previously in-effect level of immersion, without requiring user input defining the particular level of immersion to which to return, which also avoids erroneous user inputs that define erroneous levels of immersion to which to return.
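A minimal sketch of this save-and-restore behavior follows, assuming a hypothetical ImmersionSession type (the name and structure are not taken from this disclosure): a press event stashes the level currently in effect before lowering it, and a release event restores the stashed value without further input from the user.

```swift
import Foundation

/// Hypothetical model of temporarily reducing and later resuming immersion.
struct ImmersionSession {
    private(set) var currentLevel: Double
    private var savedLevel: Double?

    init(level: Double) { self.currentLevel = level }

    /// Event that reduces immersion (e.g., a press of the mechanical input
    /// element): remember the level that was in effect, then drop to minimum.
    mutating func handleReduceEvent() {
        savedLevel = currentLevel
        currentLevel = 0.0
    }

    /// Event that resumes immersion (e.g., release of the input element):
    /// return to the previously displayed level, if one was saved.
    mutating func handleResumeEvent() {
        if let previous = savedLevel {
            currentLevel = previous
            savedLevel = nil
        }
    }
}

var session = ImmersionSession(level: 0.75)
session.handleReduceEvent()   // currentLevel == 0.0, 0.75 remembered
session.handleResumeEvent()   // currentLevel == 0.75 again
```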
In some embodiments, a computer system allows objects, people, and/or portions of an environment to be visible through a user interface displayed by the system. Representations of people in the environment of the system are optionally made visible through the user interface based on their distance from the user and/or their attention (e.g., whether it is directed to the user). Representations of objects in the environment of the system are optionally made visible through the user interface based on their distance from the user and/or their determined risk level towards the user (e.g., whether or not the object(s) pose a risk to the user). Making representations of the physical environment of the system visible through the user interface helps users avoid danger in their physical environment, and facilitates interaction with people in their environment without requiring separate input from the user to do so.
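One way such conditions could be expressed is sketched below; the type names, the attention and risk inputs, and the numeric distance thresholds are placeholders chosen for illustration, not values from this disclosure.

```swift
import Foundation

/// Hypothetical descriptions of detected people and objects in the physical environment.
struct DetectedPerson { let distance: Double; let isAttendingToUser: Bool }
struct DetectedObject { let distance: Double; let isHazardous: Bool }

/// Example policy: a person breaks through the user interface when very close,
/// or when somewhat close and paying attention to the user.
func shouldShowThroughUI(_ person: DetectedPerson) -> Bool {
    person.distance < 1.5 || (person.isAttendingToUser && person.distance < 4.0)
}

/// Example policy: an object breaks through when it is close and judged to pose a risk.
func shouldShowThroughUI(_ object: DetectedObject) -> Bool {
    object.isHazardous && object.distance < 2.0
}

print(shouldShowThroughUI(DetectedPerson(distance: 3.0, isAttendingToUser: true)))  // true
print(shouldShowThroughUI(DetectedObject(distance: 3.0, isHazardous: true)))        // false
```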
In some embodiments, a computer system reduces the level of immersion associated with a user interface based on characteristics of the system and/or physical environment of the system. If the computer system determines that it is moving at a speed greater than a speed threshold, the system optionally reduces the level of immersion at which it is displaying user interface(s) so the user of the system is able to view the physical environment via the system. If the computer system determines that a sound associated with potential danger is detected in the environment of the system, the system optionally reduces the level of immersion at which it is displaying user interface(s) so the user of the system is able to view the physical environment via the system. Reducing the level of immersion as described provides a quick and efficient manner of allowing the user of the system to see the physical environment, without requiring separate input from the user to do so.
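The sketch below illustrates such checks in a simplified form; the speed threshold and the sound-classification input are hypothetical placeholders rather than parameters of the described system.

```swift
import Foundation

/// Hypothetical triggers for automatically lowering immersion.
enum EnvironmentEvent {
    case deviceSpeed(metersPerSecond: Double)
    case classifiedSound(isPotentiallyDangerous: Bool)
}

/// Returns a reduced immersion level when the device is moving quickly or a
/// potentially dangerous sound is detected, and the current level otherwise.
func adjustedImmersion(current: Double, for event: EnvironmentEvent) -> Double {
    let speedThreshold = 1.0  // placeholder threshold, in meters per second
    switch event {
    case .deviceSpeed(let speed) where speed > speedThreshold:
        return 0.0
    case .classifiedSound(let dangerous) where dangerous:
        return 0.0
    default:
        return current
    }
}

print(adjustedImmersion(current: 0.9, for: .deviceSpeed(metersPerSecond: 2.5)))              // 0.0
print(adjustedImmersion(current: 0.9, for: .classifiedSound(isPotentiallyDangerous: false))) // 0.9
```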
When describing a CGR experience, various terms are used to differentially refer to several related but distinct environments that the user may sense and/or with which a user may interact (e.g., with inputs detected by a computer system 101 generating the CGR experience that cause the computer system generating the CGR experience to generate audio, visual, and/or tactile feedback corresponding to various inputs provided to the computer system 101). The following is a subset of these terms:
Physical environment: A physical environment refers to a physical world that people can sense and/or interact with without aid of electronic systems. Physical environments, such as a physical park, include physical articles, such as physical trees, physical buildings, and physical people. People can directly sense and/or interact with the physical environment, such as through sight, touch, hearing, taste, and smell.
Computer-generated reality: In contrast, a computer-generated reality (CGR) environment refers to a wholly or partially simulated environment that people sense and/or interact with via an electronic system. In CGR, a subset of a person's physical motions, or representations thereof, are tracked, and, in response, one or more characteristics of one or more virtual objects simulated in the CGR environment are adjusted in a manner that comports with at least one law of physics. For example, a CGR system may detect a person's head turning and, in response, adjust graphical content and an acoustic field presented to the person in a manner similar to how such views and sounds would change in a physical environment. In some situations (e.g., for accessibility reasons), adjustments to characteristic(s) of virtual object(s) in a CGR environment may be made in response to representations of physical motions (e.g., vocal commands). A person may sense and/or interact with a CGR object using any one of their senses, including sight, sound, touch, taste, and smell. For example, a person may sense and/or interact with audio objects that create a 3D or spatial audio environment that provides the perception of point audio sources in 3D space. In another example, audio objects may enable audio transparency, which selectively incorporates ambient sounds from the physical environment with or without computer-generated audio. In some CGR environments, a person may sense and/or interact only with audio objects.
Examples of CGR include virtual reality and mixed reality.
Virtual reality: A virtual reality (VR) environment refers to a simulated environment that is designed to be based entirely on computer-generated sensory inputs for one or more senses. A VR environment comprises a plurality of virtual objects with which a person may sense and/or interact. For example, computer-generated imagery of trees, buildings, and avatars representing people are examples of virtual objects. A person may sense and/or interact with virtual objects in the VR environment through a simulation of the person's presence within the computer-generated environment, and/or through a simulation of a subset of the person's physical movements within the computer-generated environment.
Mixed reality: In contrast to a VR environment, which is designed to be based entirely on computer-generated sensory inputs, a mixed reality (MR) environment refers to a simulated environment that is designed to incorporate sensory inputs from the physical environment, or a representation thereof, in addition to including computer-generated sensory inputs (e.g., virtual objects). On a virtuality continuum, a mixed reality environment is anywhere between, but not including, a wholly physical environment at one end and a virtual reality environment at the other end. In some MR environments, computer-generated sensory inputs may respond to changes in sensory inputs from the physical environment. Also, some electronic systems for presenting an MR environment may track location and/or orientation with respect to the physical environment to enable virtual objects to interact with real objects (that is, physical articles from the physical environment or representations thereof). For example, a system may account for movements so that a virtual tree appears stationary with respect to the physical ground.
Examples of mixed realities include augmented reality and augmented virtuality.
Augmented reality: An augmented reality (AR) environment refers to a simulated environment in which one or more virtual objects are superimposed over a physical environment, or a representation thereof. For example, an electronic system for presenting an AR environment may have a transparent or translucent display through which a person may directly view the physical environment. The system may be configured to present virtual objects on the transparent or translucent display, so that a person, using the system, perceives the virtual objects superimposed over the physical environment. Alternatively, a system may have an opaque display and one or more imaging sensors that capture images or video of the physical environment, which are representations of the physical environment. The system composites the images or video with virtual objects, and presents the composition on the opaque display. A person, using the system, indirectly views the physical environment by way of the images or video of the physical environment, and perceives the virtual objects superimposed over the physical environment. As used herein, a video of the physical environment shown on an opaque display is called “pass-through video,” meaning a system uses one or more image sensor(s) to capture images of the physical environment, and uses those images in presenting the AR environment on the opaque display. Further alternatively, a system may have a projection system that projects virtual objects into the physical environment, for example, as a hologram or on a physical surface, so that a person, using the system, perceives the virtual objects superimposed over the physical environment. An augmented reality environment also refers to a simulated environment in which a representation of a physical environment is transformed by computer-generated sensory information. For example, in providing pass-through video, a system may transform one or more sensor images to impose a select perspective (e.g., viewpoint) different than the perspective captured by the imaging sensors. As another example, a representation of a physical environment may be transformed by graphically modifying (e.g., enlarging) portions thereof, such that the modified portions may be representative but not photorealistic versions of the originally captured images. As a further example, a representation of a physical environment may be transformed by graphically eliminating or obfuscating portions thereof.
Augmented virtuality: An augmented virtuality (AV) environment refers to a simulated environment in which a virtual or computer generated environment incorporates one or more sensory inputs from the physical environment. The sensory inputs may be representations of one or more characteristics of the physical environment. For example, an AV park may have virtual trees and virtual buildings, but people with faces photorealistically reproduced from images taken of physical people. As another example, a virtual object may adopt a shape or color of a physical article imaged by one or more imaging sensors. As a further example, a virtual object may adopt shadows consistent with the position of the sun in the physical environment.
Hardware: There are many different types of electronic systems that enable a person to sense and/or interact with various CGR environments. Examples include head mounted systems, projection-based systems, heads-up displays (HUDs), vehicle windshields having integrated display capability, windows having integrated display capability, displays formed as lenses designed to be placed on a person's eyes (e.g., similar to contact lenses), headphones/earphones, speaker arrays, input systems (e.g., wearable or handheld controllers with or without haptic feedback), smartphones, tablets, and desktop/laptop computers. A head mounted system may have one or more speaker(s) and an integrated opaque display. Alternatively, a head mounted system may be configured to accept an external opaque display (e.g., a smartphone). The head mounted system may incorporate one or more imaging sensors to capture images or video of the physical environment, and/or one or more microphones to capture audio of the physical environment. Rather than an opaque display, a head mounted system may have a transparent or translucent display. The transparent or translucent display may have a medium through which light representative of images is directed to a person's eyes. The display may utilize digital light projection, OLEDs, LEDs, uLEDs, liquid crystal on silicon, laser scanning light source, or any combination of these technologies. The medium may be an optical waveguide, a hologram medium, an optical combiner, an optical reflector, or any combination thereof. In one embodiment, the transparent or translucent display may be configured to become opaque selectively. Projection-based systems may employ retinal projection technology that projects graphical images onto a person's retina. Projection systems also may be configured to project virtual objects into the physical environment, for example, as a hologram or on a physical surface. In some embodiments, the controller 110 is configured to manage and coordinate a CGR experience for the user. In some embodiments, the controller 110 includes a suitable combination of software, firmware, and/or hardware. The controller 110 is described in greater detail below with respect to
In some embodiments, the display generation component 120 is configured to provide the CGR experience (e.g., at least a visual component of the CGR experience) to the user. In some embodiments, the display generation component 120 includes a suitable combination of software, firmware, and/or hardware. The display generation component 120 is described in greater detail below with respect to
According to some embodiments, the display generation component 120 provides a CGR experience to the user while the user is virtually and/or physically present within the scene 105.
In some embodiments, the display generation component is worn on a part of the user's body (e.g., on his/her head, on his/her hand, etc.). As such, the display generation component 120 includes one or more CGR displays provided to display the CGR content. For example, in various embodiments, the display generation component 120 encloses the field-of-view of the user. In some embodiments, the display generation component 120 is a handheld device (such as a smartphone or tablet) configured to present CGR content, and the user holds the device with a display directed towards the field-of-view of the user and a camera directed towards the scene 105. In some embodiments, the handheld device is optionally placed within an enclosure that is worn on the head of the user. In some embodiments, the handheld device is optionally placed on a support (e.g., a tripod) in front of the user. In some embodiments, the display generation component 120 is a CGR chamber, enclosure, or room configured to present CGR content in which the user does not wear or hold the display generation component 120. Many user interfaces described with reference to one type of hardware for displaying CGR content (e.g., a handheld device or a device on a tripod) could be implemented on another type of hardware for displaying CGR content (e.g., an HMD or other wearable computing device). For example, a user interface showing interactions with CGR content triggered based on interactions that happen in a space in front of a handheld or tripod mounted device could similarly be implemented with an HMD where the interactions happen in a space in front of the HMD and the responses of the CGR content are displayed via the HMD. Similarly, a user interface showing interactions with CGR content triggered based on movement of a handheld or tripod mounted device relative to the physical environment (e.g., the scene 105 or a part of the user's body (e.g., the user's eye(s), head, or hand)) could similarly be implemented with an HMD where the movement is caused by movement of the HMD relative to the physical environment (e.g., the scene 105 or a part of the user's body (e.g., the user's eye(s), head, or hand)).
While pertinent features of the operation environment 100 are shown in
In some embodiments, the one or more communication buses 204 include circuitry that interconnects and controls communications between system components. In some embodiments, the one or more I/O devices 206 include at least one of a keyboard, a mouse, a touchpad, a joystick, one or more microphones, one or more speakers, one or more image sensors, one or more displays, and/or the like.
The memory 220 includes high-speed random-access memory, such as dynamic random-access memory (DRAM), static random-access memory (SRAM), double-data-rate random-access memory (DDR RAM), or other random-access solid-state memory devices. In some embodiments, the memory 220 includes non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid-state storage devices. The memory 220 optionally includes one or more storage devices remotely located from the one or more processing units 202. The memory 220 comprises a non-transitory computer readable storage medium. In some embodiments, the memory 220 or the non-transitory computer readable storage medium of the memory 220 stores the following programs, modules and data structures, or a subset thereof including an optional operating system 230 and a CGR experience module 240.
The operating system 230 includes instructions for handling various basic system services and for performing hardware dependent tasks. In some embodiments, the CGR experience module 240 is configured to manage and coordinate one or more CGR experiences for one or more users (e.g., a single CGR experience for one or more users, or multiple CGR experiences for respective groups of one or more users). To that end, in various embodiments, the CGR experience module 240 includes a data obtaining unit 242, a tracking unit 244, a coordination unit 246, and a data transmitting unit 248.
In some embodiments, the data obtaining unit 242 is configured to obtain data (e.g., presentation data, interaction data, sensor data, location data, etc.) from at least the display generation component 120 of
In some embodiments, the tracking unit 244 is configured to map the scene 105 and to track the position/location of at least the display generation component 120 with respect to the scene 105 of
In some embodiments, the coordination unit 246 is configured to manage and coordinate the CGR experience presented to the user by the display generation component 120, and optionally, by one or more of the output devices 155 and/or peripheral devices 195. To that end, in various embodiments, the coordination unit 246 includes instructions and/or logic therefor, and heuristics and metadata therefor.
In some embodiments, the data transmitting unit 248 is configured to transmit data (e.g., presentation data, location data, etc.) to at least the display generation component 120, and optionally, to one or more of the input devices 125, output devices 155, sensors 190, and/or peripheral devices 195. To that end, in various embodiments, the data transmitting unit 248 includes instructions and/or logic therefor, and heuristics and metadata therefor.
Although the data obtaining unit 242, the tracking unit 244 (e.g., including the eye tracking unit 243 and the hand tracking unit 244), the coordination unit 246, and the data transmitting unit 248 are shown as residing on a single device (e.g., the controller 110), it should be understood that in other embodiments, any combination of the data obtaining unit 242, the tracking unit 244 (e.g., including the eye tracking unit 243 and the hand tracking unit 244), the coordination unit 246, and the data transmitting unit 248 may be located in separate computing devices.
Moreover,
In some embodiments, the one or more communication buses 304 include circuitry that interconnects and controls communications between system components. In some embodiments, the one or more I/O devices and sensors 306 include at least one of an inertial measurement unit (IMU), an accelerometer, a gyroscope, a thermometer, one or more physiological sensors (e.g., blood pressure monitor, heart rate monitor, blood oxygen sensor, blood glucose sensor, etc.), one or more microphones, one or more speakers, a haptics engine, one or more depth sensors (e.g., a structured light, a time-of-flight, or the like), and/or the like.
In some embodiments, the one or more CGR displays 312 are configured to provide the CGR experience to the user. In some embodiments, the one or more CGR displays 312 correspond to holographic, digital light processing (DLP), liquid-crystal display (LCD), liquid-crystal on silicon (LCoS), organic light-emitting field-effect transistor (OLET), organic light-emitting diode (OLED), surface-conduction electron-emitter display (SED), field-emission display (FED), quantum-dot light-emitting diode (QD-LED), micro-electro-mechanical system (MEMS), and/or the like display types. In some embodiments, the one or more CGR displays 312 correspond to diffractive, reflective, polarized, holographic, etc. waveguide displays. For example, the HMD 120 includes a single CGR display. In another example, the HMD 120 includes a CGR display for each eye of the user. In some embodiments, the one or more CGR displays 312 are capable of presenting MR and VR content. In some embodiments, the one or more CGR displays 312 are capable of presenting MR or VR content.
In some embodiments, the one or more image sensors 314 are configured to obtain image data that corresponds to at least a portion of the face of the user that includes the eyes of the user (and may be referred to as an eye-tracking camera). In some embodiments, the one or more image sensors 314 are configured to obtain image data that corresponds to at least a portion of the user's hand(s) and optionally arm(s) of the user (and may be referred to as a hand-tracking camera). In some embodiments, the one or more image sensors 314 are configured to be forward-facing so as to obtain image data that corresponds to the scene as would be viewed by the user if the HMD 120 was not present (and may be referred to as a scene camera). The one or more optional image sensors 314 can include one or more RGB cameras (e.g., with a complementary metal-oxide-semiconductor (CMOS) image sensor or a charge-coupled device (CCD) image sensor), one or more infrared (IR) cameras, one or more event-based cameras, and/or the like.
The memory 320 includes high-speed random-access memory, such as DRAM, SRAM, DDR RAM, or other random-access solid-state memory devices. In some embodiments, the memory 320 includes non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid-state storage devices. The memory 320 optionally includes one or more storage devices remotely located from the one or more processing units 302. The memory 320 comprises a non-transitory computer readable storage medium. In some embodiments, the memory 320 or the non-transitory computer readable storage medium of the memory 320 stores the following programs, modules and data structures, or a subset thereof including an optional operating system 330 and a CGR presentation module 340.
The operating system 330 includes instructions for handling various basic system services and for performing hardware dependent tasks. In some embodiments, the CGR presentation module 340 is configured to present CGR content to the user via the one or more CGR displays 312. To that end, in various embodiments, the CGR presentation module 340 includes a data obtaining unit 342, a CGR presenting unit 344, a CGR map generating unit 346, and a data transmitting unit 348.
In some embodiments, the data obtaining unit 342 is configured to obtain data (e.g., presentation data, interaction data, sensor data, location data, etc.) from at least the controller 110 of
In some embodiments, the CGR presenting unit 344 is configured to present CGR content via the one or more CGR displays 312. To that end, in various embodiments, the CGR presenting unit 344 includes instructions and/or logic therefor, and heuristics and metadata therefor.
In some embodiments, the CGR map generating unit 346 is configured to generate a CGR map (e.g., a 3D map of the mixed reality scene or a map of the physical environment into which computer generated objects can be placed to generate the computer generated reality) based on media content data. To that end, in various embodiments, the CGR map generating unit 346 includes instructions and/or logic therefor, and heuristics and metadata therefor.
In some embodiments, the data transmitting unit 348 is configured to transmit data (e.g., presentation data, location data, etc.) to at least the controller 110, and optionally one or more of the input devices 125, output devices 155, sensors 190, and/or peripheral devices 195. To that end, in various embodiments, the data transmitting unit 348 includes instructions and/or logic therefor, and heuristics and metadata therefor.
Although the data obtaining unit 342, the CGR presenting unit 344, the CGR map generating unit 346, and the data transmitting unit 348 are shown as residing on a single device (e.g., the display generation component 120 of
Moreover,
In some embodiments, the hand tracking device 140 includes image sensors 404 (e.g., one or more IR cameras, 3D cameras, depth cameras, and/or color cameras, etc.) that capture three-dimensional scene information that includes at least a hand 406 of a human user. The image sensors 404 capture the hand images with sufficient resolution to enable the fingers and their respective positions to be distinguished. The image sensors 404 typically capture images of other parts of the user's body, as well, or possibly all of the body, and may have either zoom capabilities or a dedicated sensor with enhanced magnification to capture images of the hand with the desired resolution. In some embodiments, the image sensors 404 also capture 2D color video images of the hand 406 and other elements of the scene. In some embodiments, the image sensors 404 are used in conjunction with other image sensors to capture the physical environment of the scene 105, or serve as the image sensors that capture the physical environments of the scene 105. In some embodiments, the image sensors 404 are positioned relative to the user or the user's environment in a way that a field of view of the image sensors or a portion thereof is used to define an interaction space in which hand movements captured by the image sensors are treated as inputs to the controller 110.
In some embodiments, the image sensors 404 output a sequence of frames containing 3D map data (and possibly color image data, as well) to the controller 110, which extracts high-level information from the map data. This high-level information is typically provided via an Application Program Interface (API) to an application running on the controller, which drives the display generation component 120 accordingly. For example, the user may interact with software running on the controller 110 by moving his hand 406 and changing his hand posture.
In some embodiments, the image sensors 404 project a pattern of spots onto a scene containing the hand 406 and capture an image of the projected pattern. In some embodiments, the controller 110 computes the 3D coordinates of points in the scene (including points on the surface of the user's hand) by triangulation, based on transverse shifts of the spots in the pattern. This approach is advantageous in that it does not require the user to hold or wear any sort of beacon, sensor, or other marker. It gives the depth coordinates of points in the scene relative to a predetermined reference plane, at a certain distance from the image sensors 404. In the present disclosure, the image sensors 404 are assumed to define an orthogonal set of x, y, z axes, so that depth coordinates of points in the scene correspond to z components measured by the image sensors. Alternatively, the hand tracking device 440 may use other methods of 3D mapping, such as stereoscopic imaging or time-of-flight measurements, based on single or multiple cameras or other types of sensors.
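For context, the transverse-shift computation described above is closely related to the textbook triangulation relation for a projector/camera (or stereo) pair, sketched below; the formula and the example numbers are generic illustrations, not necessarily the exact computation performed by the controller 110.

```swift
import Foundation

/// A textbook triangulation relation: for an emitter/camera pair separated by a
/// baseline `b` (meters), with focal length `f` (pixels), a spot that is shifted
/// transversely by `disparity` pixels relative to the reference pattern lies at
/// depth z = f * b / disparity from the sensor.
func depthFromSpotShift(focalLengthPixels f: Double,
                        baselineMeters b: Double,
                        disparityPixels disparity: Double) -> Double? {
    guard disparity > 0 else { return nil }   // no measurable shift: depth is unresolved
    return f * b / disparity
}

// Example with illustrative numbers: f = 600 px, b = 0.08 m, disparity = 12 px.
if let z = depthFromSpotShift(focalLengthPixels: 600, baselineMeters: 0.08, disparityPixels: 12) {
    print("Estimated depth: \(z) m")   // 4.0 m
}
```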
In some embodiments, the hand tracking device 140 captures and processes a temporal sequence of depth maps containing the user's hand, while the user moves his hand (e.g., whole hand or one or more fingers). Software running on a processor in the image sensors 404 and/or the controller 110 processes the 3D map data to extract patch descriptors of the hand in these depth maps. The software matches these descriptors to patch descriptors stored in a database 408, based on a prior learning process, in order to estimate the pose of the hand in each frame. The pose typically includes 3D locations of the user's hand joints and finger tips.
The software may also analyze the trajectory of the hands and/or fingers over multiple frames in the sequence in order to identify gestures. The pose estimation functions described herein may be interleaved with motion tracking functions, so that patch-based pose estimation is performed only once in every two (or more) frames, while tracking is used to find changes in the pose that occur over the remaining frames. The pose, motion and gesture information are provided via the above-mentioned API to an application program running on the controller 110. This program may, for example, move and modify images presented on the display generation component 120, or perform other functions, in response to the pose and/or gesture information.
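The interleaving schedule described above can be sketched as follows; the routine names and the estimation interval are hypothetical stand-ins for the patch-based estimation and motion tracking stages, not identifiers from this disclosure.

```swift
import Foundation

/// Hypothetical pose container: joint name to 3D position.
struct HandPose { var jointPositions: [String: (x: Double, y: Double, z: Double)] }

func estimatePoseFromPatches(frameIndex: Int) -> HandPose {
    // Placeholder for the slower, database-backed patch matching.
    HandPose(jointPositions: [:])
}

func trackPose(from previous: HandPose, frameIndex: Int) -> HandPose {
    // Placeholder for the cheaper frame-to-frame motion tracking.
    previous
}

/// Full patch-based estimation runs once every `estimationInterval` frames;
/// the remaining frames are handled by incremental tracking.
func processDepthFrames(count: Int, estimationInterval: Int = 2) {
    var pose = estimatePoseFromPatches(frameIndex: 0)
    for frame in 1..<count {
        if frame % estimationInterval == 0 {
            pose = estimatePoseFromPatches(frameIndex: frame)
        } else {
            pose = trackPose(from: pose, frameIndex: frame)
        }
        _ = pose  // the pose/gesture information would be handed to an application here
    }
}

processDepthFrames(count: 6)
```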
In some embodiments, the software may be downloaded to the controller 110 in electronic form, over a network, for example, or it may alternatively be provided on tangible, non-transitory media, such as optical, magnetic, or electronic memory media. In some embodiments, the database 408 is likewise stored in a memory associated with the controller 110. Alternatively or additionally, some or all of the described functions of the computer may be implemented in dedicated hardware, such as a custom or semi-custom integrated circuit or a programmable digital signal processor (DSP). Although the controller 110 is shown in
In some embodiments, the display generation component 120 uses a display mechanism (e.g., left and right near-eye display panels) for displaying frames including left and right images in front of a user's eyes to thus provide 3D virtual views to the user. For example, a head-mounted display generation component may include left and right optical lenses (referred to herein as eye lenses) located between the display and the user's eyes. In some embodiments, the display generation component may include or be coupled to one or more external video cameras that capture video of the user's environment for display. In some embodiments, a head-mounted display generation component may have a transparent or semi-transparent display through which a user may view the physical environment directly and display virtual objects on the transparent or semi-transparent display. In some embodiments, the display generation component projects virtual objects into the physical environment. The virtual objects may be projected, for example, on a physical surface or as a holograph, so that an individual, using the system, observes the virtual objects superimposed over the physical environment. In such cases, separate display panels and image frames for the left and right eyes may not be necessary.
As shown in
In some embodiments, the eye tracking device 130 is calibrated using a device-specific calibration process to determine parameters of the eye tracking device for the specific operating environment 100, for example the 3D geometric relationship and parameters of the LEDs, cameras, hot mirrors (if present), eye lenses, and display screen. The device-specific calibration process may be performed at the factory or another facility prior to delivery of the AR/VR equipment to the end user. The device-specific calibration process may be an automated calibration process or a manual calibration process. A user-specific calibration process may include an estimation of a specific user's eye parameters, for example the pupil location, fovea location, optical axis, visual axis, eye spacing, etc. Once the device-specific and user-specific parameters are determined for the eye tracking device 130, images captured by the eye tracking cameras can be processed using a glint-assisted method to determine the current visual axis and point of gaze of the user with respect to the display, in accordance with some embodiments.
As shown in
In some embodiments, the controller 110 renders AR or VR frames 562 (e.g., left and right frames for left and right display panels) and provides the frames 562 to the display 510. The controller 110 uses gaze tracking input 542 from the eye tracking cameras 540 for various purposes, for example in processing the frames 562 for display. The controller 110 optionally estimates the user's point of gaze on the display 510 based on the gaze tracking input 542 obtained from the eye tracking cameras 540 using the glint-assisted methods or other suitable methods. The point of gaze estimated from the gaze tracking input 542 is optionally used to determine the direction in which the user is currently looking.
The following describes several possible use cases for the user's current gaze direction, and is not intended to be limiting. As an example use case, the controller 110 may render virtual content differently based on the determined direction of the user's gaze. For example, the controller 110 may generate virtual content at a higher resolution in a foveal region determined from the user's current gaze direction than in peripheral regions. As another example, the controller may position or move virtual content in the view based at least in part on the user's current gaze direction. As another example, the controller may display particular virtual content in the view based at least in part on the user's current gaze direction. As another example use case in AR applications, the controller 110 may direct external cameras for capturing the physical environments of the CGR experience to focus in the determined direction. The autofocus mechanism of the external cameras may then focus on an object or surface in the environment that the user is currently looking at on the display 510. As another example use case, the eye lenses 520 may be focusable lenses, and the gaze tracking information is used by the controller to adjust the focus of the eye lenses 520 so that the virtual object that the user is currently looking at has the proper vergence to match the convergence of the user's eyes 592. The controller 110 may leverage the gaze tracking information to direct the eye lenses 520 to adjust focus so that close objects that the user is looking at appear at the right distance.
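As a concrete illustration of the foveated-rendering use case, the sketch below picks a resolution scale for a pixel based on its distance from the estimated point of gaze; the region radii and scale factors are invented for the example and are not taken from this disclosure.

```swift
import Foundation

/// A hypothetical foveated-rendering policy: render at full resolution near the
/// estimated point of gaze and at progressively lower resolution farther away.
func resolutionScale(forPixelAt point: (x: Double, y: Double),
                     gaze: (x: Double, y: Double)) -> Double {
    let distance = ((point.x - gaze.x) * (point.x - gaze.x)
                  + (point.y - gaze.y) * (point.y - gaze.y)).squareRoot()
    switch distance {
    case ..<200.0: return 1.0   // foveal region: full resolution
    case ..<600.0: return 0.5   // mid-periphery: half resolution
    default:       return 0.25  // far periphery: quarter resolution
    }
}

// Gaze near the center of a 1920 x 1080 frame.
print(resolutionScale(forPixelAt: (x: 980, y: 540), gaze: (x: 960, y: 540)))  // 1.0
print(resolutionScale(forPixelAt: (x: 100, y: 100), gaze: (x: 960, y: 540)))  // 0.25
```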
In some embodiments, the eye tracking device is part of a head-mounted device that includes a display (e.g., display 510), two eye lenses (e.g., eye lens(es) 520), eye tracking cameras (e.g., eye tracking camera(s) 540), and light sources (e.g., light sources 530 (e.g., IR or NIR LEDs)), mounted in a wearable housing. The light sources emit light (e.g., IR or NIR light) towards the user's eye(s) 592. In some embodiments, the light sources may be arranged in rings or circles around each of the lenses as shown in
In some embodiments, the display 510 emits light in the visible light range and does not emit light in the IR or NIR range, and thus does not introduce noise in the gaze tracking system. Note that the location and angle of eye tracking camera(s) 540 is given by way of example, and is not intended to be limiting. In some embodiments, a single eye tracking camera 540 is located on each side of the user's face. In some embodiments, two or more NIR cameras 540 may be used on each side of the user's face. In some embodiments, a camera 540 with a wider field of view (FOV) and a camera 540 with a narrower FOV may be used on each side of the user's face. In some embodiments, a camera 540 that operates at one wavelength (e.g., 850 nm) and a camera 540 that operates at a different wavelength (e.g., 940 nm) may be used on each side of the user's face.
Embodiments of the gaze tracking system as illustrated in
As shown in
At 610, for the current captured images, if the tracking state is YES, then the method proceeds to element 640. At 610, if the tracking state is NO, then as indicated at 620 the images are analyzed to detect the user's pupils and glints in the images. At 630, if the pupils and glints are successfully detected, then the method proceeds to element 640. Otherwise, the method returns to element 610 to process next images of the user's eyes.
At 640, if proceeding from element 610, the current frames are analyzed to track the pupils and glints based in part on prior information from the previous frames. At 640, if proceeding from element 630, the tracking state is initialized based on the detected pupils and glints in the current frames. Results of processing at element 640 are checked to verify that the results of tracking or detection can be trusted. For example, results may be checked to determine if the pupil and a sufficient number of glints to perform gaze estimation are successfully tracked or detected in the current frames. At 650, if the results cannot be trusted, then the tracking state is set to NO and the method returns to element 610 to process next images of the user's eyes. At 650, if the results are trusted, then the method proceeds to element 670. At 670, the tracking state is set to YES (if not already YES), and the pupil and glint information is passed to element 680 to estimate the user's point of gaze.
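The control flow of elements 610 through 680 can be summarized as a small state machine, sketched below with hypothetical detection, tracking, validation, and gaze-estimation routines (none of the function names appear in this disclosure): detection runs while the tracking state is NO, tracking runs on subsequent frames once pupils and glints are found, and the state is reset whenever the results cannot be trusted.

```swift
import Foundation

struct EyeFeatures { /* pupil and glint locations for the current frames */ }

// Hypothetical stand-ins for the detection, tracking, validation, and gaze
// estimation stages of the pipeline.
func detectPupilsAndGlints() -> EyeFeatures? { EyeFeatures() }
func trackPupilsAndGlints(previous: EyeFeatures) -> EyeFeatures? { EyeFeatures() }
func resultsAreTrustworthy(_ features: EyeFeatures) -> Bool { true }
func estimatePointOfGaze(from features: EyeFeatures) { /* corresponds to element 680 */ }

func processEyeFrames(frameCount: Int) {
    var trackingState = false
    var previousFeatures: EyeFeatures?

    for _ in 0..<frameCount {                                  // 610: next images of the eyes
        let features: EyeFeatures?
        if trackingState, let prior = previousFeatures {
            features = trackPupilsAndGlints(previous: prior)   // 640: track using prior frames
        } else {
            features = detectPupilsAndGlints()                 // 620/630: detect from scratch
        }

        guard let current = features, resultsAreTrustworthy(current) else {
            trackingState = false                              // 650: results not trusted
            previousFeatures = nil
            continue
        }
        trackingState = true                                   // 670: tracking state set to YES
        previousFeatures = current
        estimatePointOfGaze(from: current)                     // 680: estimate point of gaze
    }
}

processEyeFrames(frameCount: 3)
```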
Thus, the description herein includes some embodiments of three-dimensional environments (e.g., CGR environments) that include representations of real world objects and representations of virtual objects. For example, a three-dimensional environment optionally includes a representation of a table that exists in the physical environment, which is captured and displayed in the three-dimensional environment (e.g., actively via cameras and displays of an electronic device, or passively via a transparent or translucent display of the electronic device). As described previously, the three-dimensional environment is optionally a mixed reality system in which the three-dimensional environment is based on the physical environment that is captured by one or more sensors of the device and displayed via a display generation component. As a mixed reality system, the device is optionally able to selectively display portions and/or objects of the physical environment such that the respective portions and/or objects of the physical environment appear as if they exist in the three-dimensional environment displayed by the electronic device. Similarly, the device is optionally able to display virtual objects in the three-dimensional environment to appear as if the virtual objects exist in the real world (e.g., physical environment) by placing the virtual objects at respective locations in the three-dimensional environment that have corresponding locations in the real world. For example, the device optionally displays a vase such that it appears as if a real vase is placed on top of a table in the physical environment. In some embodiments, each location in the three-dimensional environment has a corresponding location in the physical environment. Thus, when the device is described as displaying a virtual object at a respective location with respect to a physical object (e.g., such as a location at or near the hand of the user, or at or near a physical table), the device displays the virtual object at a particular location in the three-dimensional environment such that it appears as if the virtual object is at or near the physical object in the physical world (e.g., the virtual object is displayed at a location in the three-dimensional environment that corresponds to a location in the physical environment at which the virtual object would be displayed if it were a real object at that particular location).
In some embodiments, real world objects that exist in the physical environment that are displayed in the three-dimensional environment can interact with virtual objects that exist only in the three-dimensional environment. For example, a three-dimensional environment can include a table and a vase placed on top of the table, with the table being a view of (or a representation of) a physical table in the physical environment, and the vase being a virtual object.
Similarly, a user is optionally able to interact with virtual objects in the three-dimensional environment using one or more hands as if the virtual objects were real objects in the physical environment. For example, as described above, one or more sensors of the device optionally capture one or more of the hands of the user and display representations of the hands of the user in the three-dimensional environment (e.g., in a manner similar to displaying a real world object in the three-dimensional environment described above), or in some embodiments, the hands of the user are visible via the display generation component via the ability to see the physical environment through the user interface due to the transparency/translucency of a portion of the display generation component that is displaying the user interface or projection of the user interface onto a transparent/translucent surface or projection of the user interface onto the user's eye or into a field of view of the user's eye. Thus, in some embodiments, the hands of the user are displayed at a respective location in the three-dimensional environment and are treated as if they were objects in the three-dimensional environment that are able to interact with the virtual objects in the three-dimensional environment as if they were real physical objects in the physical environment. In some embodiments, a user is able to move his or her hands to cause the representations of the hands in the three-dimensional environment to move in conjunction with the movement of the user's hand.
In some of the embodiments described below, the device is optionally able to determine the “effective” distance between physical objects in the physical world and virtual objects in the three-dimensional environment, for example, for the purpose of determining whether a physical object is interacting with a virtual object (e.g., whether a hand is touching, grabbing, holding, etc. a virtual object or within a threshold distance from a virtual object). For example, the device determines the distance between the hands of the user and virtual objects when determining whether the user is interacting with virtual objects and/or how the user is interacting with virtual objects. In some embodiments, the device determines the distance between the hands of the user and a virtual object by determining the distance between the location of the hands in the three-dimensional environment and the location of the virtual object of interest in the three-dimensional environment. For example, the one or more hands of the user are located at a particular position in the physical world, which the device optionally captures and displays at a particular corresponding position in the three-dimensional environment (e.g., the position in the three-dimensional environment at which the hands would be displayed if the hands were virtual, rather than physical, hands). The position of the hands in the three-dimensional environment is optionally compared against the position of the virtual object of interest in the three-dimensional environment to determine the distance between the one or more hands of the user and the virtual object. In some embodiments, the device optionally determines a distance between a physical object and a virtual object by comparing positions in the physical world (e.g., as opposed to comparing positions in the three-dimensional environment). For example, when determining the distance between one or more hands of the user and a virtual object, the device optionally determines the corresponding location in the physical world of the virtual object (e.g., the position at which the virtual object would be located in the physical world if it were a physical object rather than a virtual object), and then determines the distance between the corresponding physical position and the one or more hands of the user. In some embodiments, the same techniques are optionally used to determine the distance between any physical object and any virtual object. Thus, as described herein, when determining whether a physical object is in contact with a virtual object or whether a physical object is within a threshold distance of a virtual object, the device optionally performs any of the techniques described above to map the location of the physical object to the three-dimensional environment and/or map the location of the virtual object to the physical world.
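The position-mapping and distance check described above might be sketched as follows; the translation-only mapping, the type names, and the 5 cm threshold are simplifying assumptions for illustration, not the system's actual transform or parameters.

```swift
import Foundation

/// A simple 3D point in a shared coordinate space.
struct Point3D { var x, y, z: Double }

func distance(_ a: Point3D, _ b: Point3D) -> Double {
    ((a.x - b.x) * (a.x - b.x) + (a.y - b.y) * (a.y - b.y) + (a.z - b.z) * (a.z - b.z)).squareRoot()
}

/// Hypothetical mapping between the physical environment and the three-dimensional
/// environment; modeled here as a simple translation offset for illustration.
struct EnvironmentMapping {
    var offset: Point3D
    func toVirtual(_ p: Point3D) -> Point3D {
        Point3D(x: p.x + offset.x, y: p.y + offset.y, z: p.z + offset.z)
    }
}

/// "Effective" distance check: map the hand's physical position into the
/// three-dimensional environment, then compare against the virtual object.
func handIsNear(virtualObjectAt objectPosition: Point3D,
                handAt physicalHandPosition: Point3D,
                mapping: EnvironmentMapping,
                threshold: Double = 0.05) -> Bool {
    distance(mapping.toVirtual(physicalHandPosition), objectPosition) < threshold
}

let mapping = EnvironmentMapping(offset: Point3D(x: 0, y: 0, z: 0))
print(handIsNear(virtualObjectAt: Point3D(x: 0.2, y: 1.0, z: -0.5),
                 handAt: Point3D(x: 0.22, y: 1.0, z: -0.5),
                 mapping: mapping))   // true: within 5 cm
```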
In some embodiments, the same or similar technique is used to determine where and what the gaze of the user is directed to and/or where and at what a physical stylus held by a user is pointed. For example, if the gaze of the user is directed to a particular position in the physical environment, the device optionally determines the corresponding position in the three-dimensional environment and if a virtual object is located at that corresponding virtual position, the device optionally determines that the gaze of the user is directed to that virtual object. Similarly, the device is optionally able to determine, based on the orientation of a physical stylus, to where in the physical world the stylus is pointing. In some embodiments, based on this determination, the device determines the corresponding virtual position in the three-dimensional environment that corresponds to the location in the physical world to which the stylus is pointing, and optionally determines that the stylus is pointing at the corresponding virtual position in the three-dimensional environment.
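A simplified version of such a pointing determination is sketched below; the angular tolerance and the vector utilities are illustrative assumptions rather than details of the described implementation.

```swift
import Foundation

/// A minimal gaze/stylus "pointing" test using an angular tolerance.
struct Vector3 { var x, y, z: Double }

func normalized(_ v: Vector3) -> Vector3 {
    let len = (v.x * v.x + v.y * v.y + v.z * v.z).squareRoot()
    return Vector3(x: v.x / len, y: v.y / len, z: v.z / len)
}

func dot(_ a: Vector3, _ b: Vector3) -> Double { a.x * b.x + a.y * b.y + a.z * b.z }

/// Returns true when the ray from `origin` along `direction` points at
/// `targetPosition` within `toleranceDegrees`.
func isPointing(from origin: Vector3, along direction: Vector3,
                at targetPosition: Vector3, toleranceDegrees: Double = 3.0) -> Bool {
    let toTarget = normalized(Vector3(x: targetPosition.x - origin.x,
                                      y: targetPosition.y - origin.y,
                                      z: targetPosition.z - origin.z))
    let angle = acos(min(max(dot(normalized(direction), toTarget), -1), 1)) * 180 / .pi
    return angle <= toleranceDegrees
}

// A stylus at the origin pointing straight ahead toward an object 2 m away.
print(isPointing(from: Vector3(x: 0, y: 0, z: 0),
                 along: Vector3(x: 0, y: 0, z: -1),
                 at: Vector3(x: 0.02, y: 0, z: -2)))   // true
```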
Similarly, the embodiments described herein may refer to the location of the user (e.g., the user of the device) and/or the location of the device in the three-dimensional environment. In some embodiments, the user of the device is holding, wearing, or otherwise located at or near the electronic device. Thus, in some embodiments, the location of the device is used as a proxy for the location of the user. In some embodiments, the location of the device and/or user in the physical environment corresponds to a respective location in the three-dimensional environment. In some embodiments, the respective location is the location from which the “camera” or “view” of the three-dimensional environment extends. For example, the location of the device would be the location in the physical environment (and its corresponding location in the three-dimensional environment) from which, if a user were to stand at that location facing the respective portion of the physical environment displayed by the display generation component, the user would see the objects in the physical environment in the same position, orientation, and/or size as they are displayed by the display generation component of the device (e.g., in absolute terms and/or relative to each other). Similarly, if the virtual objects displayed in the three-dimensional environment were physical objects in the physical environment (e.g., placed at the same location in the physical environment as they are in the three-dimensional environment, and having the same size and orientation in the physical environment as in the three-dimensional environment), the location of the device and/or user is the position at which the user would see the virtual objects in the physical environment in the same position, orientation, and/or size as they are displayed by the display generation component of the device (e.g., in absolute terms and/or relative to each other and the real world objects).
In the present disclosure, various input methods are described with respect to interactions with a computer system. When an example is provided using one input device or input method and another example is provided using another input device or input method, it is to be understood that each example may be compatible with and optionally utilizes the input device or input method described with respect to another example. Similarly, various output methods are described with respect to interactions with a computer system. When an example is provided using one output device or output method and another example is provided using another output device or output method, it is to be understood that each example may be compatible with and optionally utilizes the output device or output method described with respect to another example. Similarly, various methods are described with respect to interactions with a virtual environment or a mixed reality environment through a computer system. When an example is provided using interactions with a virtual environment and another example is provided using a mixed reality environment, it is to be understood that each example may be compatible with and optionally utilizes the methods described with respect to another example. As such, the present disclosure discloses embodiments that are combinations of the features of multiple examples, without exhaustively listing all features of an embodiment in the description of each example embodiment.
In addition, in methods described herein where one or more steps are contingent upon one or more conditions having been met, it should be understood that the described method can be repeated in multiple repetitions so that over the course of the repetitions all of the conditions upon which steps in the method are contingent have been met in different repetitions of the method. For example, if a method requires performing a first step if a condition is satisfied, and a second step if the condition is not satisfied, then a person of ordinary skill would appreciate that the claimed steps are repeated until the condition has been both satisfied and not satisfied, in no particular order. Thus, a method described with one or more steps that are contingent upon one or more conditions having been met could be rewritten as a method that is repeated until each of the conditions described in the method has been met. This, however, is not required of system or computer readable medium claims where the system or computer readable medium contains instructions for performing the contingent operations based on the satisfaction of the corresponding one or more conditions and thus is capable of determining whether the contingency has or has not been satisfied without explicitly repeating steps of a method until all of the conditions upon which steps in the method are contingent have been met. A person having ordinary skill in the art would also understand that, similar to a method with contingent steps, a system or computer readable storage medium can repeat the steps of a method as many times as are needed to ensure that all of the contingent steps have been performed.
User Interfaces and Associated Processes
Attention is now directed towards embodiments of user interfaces (“UI”) and associated processes that may be implemented on a computer system, such as a portable multifunction device or a head-mounted device, with a display generation component, one or more input devices, and (optionally) one or more cameras.
As shown in
In
In some embodiments, device 101 displays three-dimensional environment 701 and/or user interfaces displayed via display generation component 120 at a particular level of immersion. As will be described in more detail later with reference to
In
In some embodiments, device 101 only performs the above-described de-emphasis when the user of device 101 looks at certain types of objects (e.g., representations of media player applications) and not other types of objects (e.g., representations of other types of applications). For example, in
In the method 800, in some embodiments, an electronic device (e.g., computer system 101 in
In method 800, in some embodiments, an electronic device (e.g., computer system 101 in
In some embodiments, the electronic device displays the content of the first application in an application window. For example, displaying a video in a video content playback application, such as an application via which one or more types of content (e.g., music, songs, television episodes, movies, etc.) can be browsed and/or in which the content can be played. In some embodiments, the second user interface is a system user interface, such as a user interface of the operating system of the electronic device, in which the first user interface is displayed. In some embodiments, the first user interface is overlaid on the second user interface. In some embodiments, the first user interface was displayed in response to an input to display the first user interface that was received while the electronic device was displaying the second user interface. In some embodiments, the second user interface is not associated with any single application in particular (e.g., because it is an operating system user interface rather than an application user interface). In some embodiments, the second user interface includes or is a three-dimensional environment within which the first user interface is displayed, such as displaying the first user interface within a computer-generated reality (CGR) environment such as a virtual reality (VR) environment, a mixed reality (MR) environment, or an augmented reality (AR) environment, etc. that is generated, displayed, or otherwise caused to be viewable by the electronic device.
In some embodiments, while concurrently displaying the first user interface and the portion of the second user interface that surrounds at least the portion of the first user interface, the electronic device detects (808), via the eye tracking device, that a gaze of a user is directed to the first user interface, such as gaze 714 directed to representation 712 in
In some embodiments, in response to detecting that the gaze of the user is directed to the first user interface, in accordance with a determination that one or more criteria are satisfied (e.g., the one or more criteria include a criterion that is satisfied when the first application is a first type of application (e.g., a media player application for browsing and/or viewing photos, videos, movies, etc.), but is not satisfied when the first application is a second type of application, different from the first type of application (e.g., a word processing application). In some embodiments, the one or more criteria are satisfied without any additional input from the user other than the gaze of the user being directed to the first user interface (e.g., the second user interface is automatically deemphasized with respect to the first user interface in response to the gaze input). In some embodiments, the one or more criteria include a criterion that is satisfied when the gaze of the user is directed to the first user interface for longer than a time threshold, such as 0.5, 1, 2, 5, 10 seconds, and not satisfied otherwise), the electronic device deemphasizes (810) the second user interface with respect to the first user interface, such as deemphasizing content of the three-dimensional environment 701 with respect to representation 712 in
In some embodiments, the one or more criteria include a criterion that is satisfied when the first application is an application of a first type, and is not satisfied when the first application is an application of a second type, different from the first type (812) (e.g., the criterion is satisfied for applications that are applications for browsing and/or viewing content or media (e.g., applications in which movies, images, music, television shows, etc. are viewable), and is not satisfied for applications that are not applications for viewing content or media (e.g., word processing applications, calendar applications, spreadsheet applications, etc.)). In some embodiments, in response to detecting that the gaze of the user is directed to the first user interface, in accordance with a determination that the one or more criteria are not satisfied (e.g., because the first application is not a content/media browsing and/or viewing application), the electronic device forgoes deemphasizing (814) the second user interface with respect to the first user interface. For example, the display of the first user interface and the second user interface remain as they were before the gaze of the user was detected as being directed to the first user interface, and optionally the relative emphasis of the first user interface with respect to the second user interface, and vice versa, remains as it was before the gaze of the user was detected as being directed to the first user interface. Therefore, the second user interface is optionally not darkened or blurred in response to the gaze of the user being directed to the first user interface. In some embodiments, the gaze-based de-emphasis of the first user interface with respect to the second user interface occurs for all types of applications, including applications other than content and/or media viewing applications (e.g., the one or more criteria do not include the application type criterion described above). The above-described manner of performing gaze-based de-emphasis of user interfaces based on the type of application associated with the user interfaces provides a quick and efficient way of only emphasizing/de-emphasizing user interfaces in situations where such emphasis/de-emphasis is likely desired, which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient (e.g., by avoiding erroneous emphasis/de-emphasis of user interfaces, which then requires additional user input to correct), which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage.
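Purely as an illustration of the criteria discussed above, the sketch below combines an application-type criterion with a gaze dwell-time criterion to decide how strongly to deemphasize the surrounding user interface. The 1-second threshold, the 0.6 dimming factor, and the enumeration of application types are assumptions made for the sketch, not values required by any embodiment.

```swift
import Foundation

/// Hypothetical application categories; only media-style apps trigger de-emphasis here.
enum AppKind { case mediaPlayer, wordProcessor, calendar }

struct GazeSample { var isOnFirstUserInterface: Bool; var dwell: TimeInterval }

/// Returns how strongly the surrounding (second) user interface should be
/// de-emphasized, as a 0...1 factor applied to brightness/saturation, given
/// the gaze state and the kind of app shown in the first user interface.
/// Assumes a 1-second dwell threshold; the document lists several options.
func surroundingDeemphasis(gaze: GazeSample, appKind: AppKind,
                           dwellThreshold: TimeInterval = 1.0) -> Double {
    let isMediaType = (appKind == .mediaPlayer)          // criterion: first type of application
    let dwellLongEnough = gaze.isOnFirstUserInterface && gaze.dwell >= dwellThreshold
    return (isMediaType && dwellLongEnough) ? 0.6 : 0.0  // 0.6 = dim/blur amount, 0 = unchanged
}

// Looking at a media window for 2 seconds dims the surroundings;
// the same gaze on a word-processing window leaves them unchanged.
print(surroundingDeemphasis(gaze: GazeSample(isOnFirstUserInterface: true, dwell: 2), appKind: .mediaPlayer))
print(surroundingDeemphasis(gaze: GazeSample(isOnFirstUserInterface: true, dwell: 2), appKind: .wordProcessor))
```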
In some embodiments, after detecting that the gaze of the user is directed to the first user interface and while displaying the second user interface as deemphasized with respect to the first user interface (e.g., the gaze of the user being directed to the first user interface caused the second user interface to be deemphasized with respect to the first user interface), the electronic device detects (816), via the eye tracking device, that the gaze of the user is directed to the second user interface, such as gaze 714 moving back outside of representation 712, such as in
In some embodiments, the display of the first user interface and the second user interface returns to how the two user interfaces were displayed (e.g., in an absolute sense and/or relative to one another) before the gaze of the user was detected as being directed to the first user interface. In some embodiments, the second user interface is brightened and/or un-blurred and/or otherwise less obscured. In some embodiments, the first user interface is darkened and/or blurred and/or otherwise more obscured. In some embodiments, the second user interface becomes more emphasized relative to the first user interface as compared with the relative display of the first and second user interfaces before the gaze of the user was directed to the first user interface. In some embodiments, the second user interface becomes less emphasized relative to the first user interface as compared with the relative display of the first and second user interfaces before the gaze of the user was directed to the first user interface. The above-described manner of reverting display of the user interfaces based on gaze provides a quick and efficient way of returning to the prior display of the user interfaces, which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient (e.g., by not requiring additional or different kinds of user input to revert display of the user interfaces), which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage.
In some embodiments, before detecting that the gaze of the user is directed to the first user interface, the first user interface and the second user interface are displayed concurrently with a representation of at least a portion of a physical environment of the electronic device (820), such as representations 702b and 704b in
In some embodiments, deemphasizing the second user interface with respect to the first user interface includes deemphasizing the representation of the portion of the physical environment of the electronic device with respect to the first user interface (822), such as the deemphasizing of representations 702b and 704b shown in
In some embodiments, before detecting that the gaze of the user is directed to the first user interface, the first user interface and the second user interface are displayed concurrently with a virtual object that is not in a physical environment of the electronic device (824), such as representations 706, 708 and 710 in
In some embodiments, before detecting that the gaze of the user is directed to the first user interface, the first user interface and the second user interface are displayed concurrently with a representation of at least a portion of a physical environment of the electronic device (828), such as representations 702b and 704b in
In some embodiments, the first user interface and the second user interface are displayed concurrently with a virtual object that is not in the physical environment of the electronic device (828), such as representations 706, 708 and 710 in
In some embodiments, the second user interface includes a virtual object that is not in a physical environment of the electronic device (836), such as representations 706, 708 and 710 in
In some embodiments, before detecting that the gaze of the user is directed to the first user interface, the first user interface is displayed at a first level of immersion (838), such as level of immersion 718 in
In some embodiments, the first user interface is displayed overlaid on the second user interface (842), such as representation 712 overlaid on content of three-dimensional environment 701 in
As shown in
In
In some embodiments, device 101 displays three-dimensional environment 901 and/or user interfaces displayed via display generation component 120 at a particular level of immersion. In
In
In
As previously mentioned, in some embodiments, the levels of immersion at which different user interfaces are displayed are optionally set independently from one another. For example, changes in the level of immersion at which operating system user interfaces are displayed optionally do not change the level of immersion at which application user interfaces are displayed, and in some embodiments, changes in the level of immersion at which the user interface of a first application is displayed do not change the level of immersion at which the user interface of a second application is displayed. Thus, in some embodiments, in response to an input to switch from displaying one user interface to displaying another user interface, device 101 optionally displays the switched-to user interface at the level of immersion at which that user interface was last displayed, independent of any changes of immersion that were applied to the currently-displayed user interface.
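The independent, per-user-interface treatment of immersion described above can be illustrated with a small bookkeeping sketch. The listing below is hypothetical: it assumes integer immersion levels from 0 to 10 and string identifiers for user interfaces, neither of which is implied by the embodiments themselves.

```swift
import Foundation

/// Hypothetical immersion model: levels from 0 (none) to 10 (maximum).
typealias ImmersionLevel = Int

/// Tracks the last immersion level used for each user interface independently,
/// so switching back to a user interface restores the level it was last shown at.
struct ImmersionStore {
    private var lastLevel: [String: ImmersionLevel] = [:]
    private(set) var currentUI: String?

    /// Switch to a user interface, restoring its previously used level (or a default).
    mutating func show(_ ui: String, defaultLevel: ImmersionLevel = 5) -> ImmersionLevel {
        currentUI = ui
        if lastLevel[ui] == nil { lastLevel[ui] = defaultLevel }
        return lastLevel[ui]!
    }

    /// Change immersion for the currently displayed user interface only;
    /// other user interfaces keep their own stored levels.
    mutating func setLevel(_ level: ImmersionLevel) {
        guard let ui = currentUI else { return }
        lastLevel[ui] = max(0, min(10, level))
    }
}

// Example: raising immersion in "appC" does not change the system user
// interface's level, which is restored when switching back.
var store = ImmersionStore()
_ = store.show("systemHome")            // shown at default level 5
_ = store.show("appC")                  // shown at default level 5
store.setLevel(9)                        // user raises appC's immersion
print(store.show("systemHome"))          // 5 (unchanged)
print(store.show("appC"))                // 9 (restored)
```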
For example, in
As shown in
As previously described, changes in immersion at which a user interface in one application is displayed optionally do not change the level of immersion at which user interfaces of other applications and/or user interfaces of the operating system are displayed. For example, if device 101 detects an input to change the level of immersion of user interface 934 of App C in
In some embodiments, applications (e.g., App C) include controls in their user interfaces for changing the level of immersion at which those applications are displayed. For example, in some embodiments, user interface 934 of App C includes controls for increasing or decreasing the level of immersion at which user interface 934 is displayed, and the immersion at which App C is displayed is changed by device 101 in response to interaction with those controls additionally or alternatively to interaction with input element 920. Further, in some embodiments, immersion above a threshold (e.g. maximum immersion) for an operating system user interface is reachable in response to inputs detected at input element 920. However, in some embodiments, immersion above that threshold for an application user interface is not reachable in response to inputs detected at input element 920—in some embodiments, the immersion at which an application user interface is displayed can only reach the threshold immersion using the input element, and once at that threshold, a different type of input is required to increase the immersion at which the application user interface is displayed beyond that threshold. For example, in
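As a purely illustrative sketch of the threshold behavior described above, the listing below caps the immersion reachable through the rotatable input element for application user interfaces and requires a different input to go beyond that cap, while leaving operating-system user interfaces uncapped. The specific cap and maximum values are assumptions of the sketch.

```swift
import Foundation

/// Hypothetical sketch: rotating the input element can raise an application's
/// immersion only up to a cap; going past the cap needs a different kind of
/// input (e.g., an in-app control). System user interfaces have no such cap.
struct ImmersionController {
    var level: Int = 0
    let maximum = 10
    let applicationCap = 7        // assumed threshold reachable via the input element
    var isSystemUI: Bool

    /// Rotation of the input element by `steps` detents.
    mutating func rotate(by steps: Int) {
        let cap = isSystemUI ? maximum : applicationCap
        level = max(0, min(cap, level + steps))
    }

    /// A different input type (e.g., selecting an in-app control) is required
    /// to exceed the cap for application user interfaces in this sketch.
    mutating func confirmFullImmersion() {
        level = maximum
    }
}

var appUI = ImmersionController(isSystemUI: false)
appUI.rotate(by: 10)             // stops at the cap (7), not the maximum
print(appUI.level)               // 7
appUI.confirmFullImmersion()     // an explicit, different input goes past the cap
print(appUI.level)               // 10
```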
In the method 1000, in some embodiments, an electronic device (e.g., computer system 101 in
In some embodiments, the first respective user interface is a user interface generated by and/or displayed by the operating system of the electronic device, rather than by an application (e.g., installed) on the electronic device. For example, the first respective user interface is optionally an application browsing and/or launching user interface that is a system user interface, and optionally includes a plurality of selectable representations of different applications that when selected cause the electronic device to display a user interface of the selected application (e.g., launch the selected application). In some embodiments, the first respective user interface is displayed within a computer-generated reality (CGR) environment such as a virtual reality (VR) environment, a mixed reality (MR) environment, or an augmented reality (AR) environment, etc. that is generated, displayed, or otherwise caused to be viewable by the electronic device.
In some embodiments, the first respective user interface is displayed at a first level of immersion (1004), such as level of immersion 918 in
In some embodiments, while displaying the first respective user interface at the first level of immersion, the electronic device receives (1006), via the one or more input devices, a first user input corresponding to a request to display a second respective user interface of a respective application, such as the input on the icon for App C in
In some embodiments, while displaying the second respective user interface at the second level of immersion, the electronic device receives (1012), via the one or more input devices, a second user input corresponding to a request to change a current level of immersion associated with the electronic device, such as an input on input element 920 in
In some embodiments, while displaying the second respective user interface at the third level of immersion, the electronic device receives (1016), via the one or more input devices, a third user input corresponding to a request to display the first respective user interface, such as an input to cease display of user interface 934 in
In some embodiments, the one or more input devices include a rotatable input element, and receiving the second user input corresponding to the request to change the current level of immersion associated with the electronic device includes detecting rotation of the rotatable input element (1022), such as described with reference to input element 920. For example, the electronic device includes or is in communication with a mechanical input element that rotates in response to rotational user input provided to it, and in some embodiments is able to be depressed in response to depression user input provided to it. In some embodiments, rotational input of the mechanical input element corresponds to an input to change the current level of immersion of the user interface(s) currently displayed by the electronic device, via the display generation component. In some embodiments, the direction of the rotation (e.g., clockwise or counterclockwise) defines whether the input is a request to increase (e.g., clockwise) or decrease (e.g., counterclockwise) the level of immersion of the displayed user interface(s). In some embodiments, the magnitude of the rotation defines the amount by which the level of immersion of the displayed user interface(s) is changed. The above-described manner of changing the level of immersion of the displayed user interface(s) provides a quick and efficient manner of changing the level of immersion of the displayed user interface(s), which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient (e.g., by avoiding the need to display user interface elements for changing the level of immersion), which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage.
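One hypothetical way to express the mapping from rotation to immersion described above is sketched below: the sign of the rotation selects the direction of the change and its magnitude selects the size of the change, clamped to the available range. The sensitivity of 36 degrees per level is an assumption made for the sketch.

```swift
import Foundation

/// Hypothetical mapping from rotation of the rotatable input element to a change
/// in immersion: the sign of the rotation picks the direction (clockwise raises
/// immersion), and its magnitude picks how far the level moves.
func immersionAfterRotation(current: Double,
                            rotationDegrees: Double,       // + clockwise, - counterclockwise
                            degreesPerLevel: Double = 36,  // assumed sensitivity
                            maximum: Double = 10) -> Double {
    let delta = rotationDegrees / degreesPerLevel
    return max(0, min(maximum, current + delta))
}

// A 180-degree clockwise turn raises immersion by 5 levels; turning back lowers it.
print(immersionAfterRotation(current: 2, rotationDegrees: 180))   // 7.0
print(immersionAfterRotation(current: 7, rotationDegrees: -72))   // 5.0
```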
In some embodiments, while detecting the rotation of the rotatable element (1024), in accordance with a determination that a current level of immersion of a currently displayed user interface is a minimum level of immersion, the electronic device generates (1026), via a tactile output generator (e.g., coupled to the rotatable input element, and/or coupled to the housing of the device to which the rotatable input element is optionally coupled, and/or etc.), a first tactile output with a respective characteristic having a first value, such as described with reference to
In some embodiments, in accordance with a determination that the current level of immersion of the currently displayed user interface is a maximum level of immersion, the electronic device generates (1028), via the tactile output generator, a second tactile output with the respective characteristic having the first value, such as described with reference to
In some embodiments, in accordance with a determination that the current level of immersion of the currently displayed user interface is an intermediate level of immersion, the electronic device generates (1030), via the tactile output generator, a third tactile output with the respective characteristic having a second value, different from the first value, such as described with reference to
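As an illustration only, the tactile behavior described in the preceding paragraphs can be summarized as producing one feedback characteristic at either end of the immersion range and a different characteristic at intermediate levels, as in the hypothetical sketch below. The level range of 0 to 10 is an assumption of the sketch.

```swift
import Foundation

/// Hypothetical tactile feedback while the rotatable input element is turned:
/// the same "boundary" feel is produced at both the minimum and maximum levels
/// of immersion, and a different "detent" feel at intermediate levels.
enum TactileFeel { case boundary, detent }

func tactileOutput(forLevel level: Int, minimum: Int = 0, maximum: Int = 10) -> TactileFeel {
    // First characteristic value at either end of the range...
    if level == minimum || level == maximum { return .boundary }
    // ...and a second, different value everywhere in between.
    return .detent
}

print(tactileOutput(forLevel: 0))    // boundary (cannot go lower)
print(tactileOutput(forLevel: 10))   // boundary (cannot go higher)
print(tactileOutput(forLevel: 4))    // detent (intermediate)
```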
In some embodiments, the second user input includes a first portion and a second portion (1032) (e.g., of a total clockwise rotation of 360 degrees, a first clockwise rotation of 180 degrees followed by a second clockwise rotation of 180 degrees). In some embodiments, while receiving the second user input (1034), in response to receiving the first portion of the second user input, the electronic device displays (1036), via the display generation component, the second respective user interface at a fourth level of immersion that is in between the second level of immersion and the third level of immersion, such as described with reference to
In some embodiments, receiving the second user input corresponding to the request to change the current level of immersion associated with the electronic device includes (1040), in accordance with a determination that the second respective user interface is a user interface of a first application, detecting interaction with a first respective control displayed with the second respective user interface (1042), and in accordance with a determination that the second respective user interface is a user interface of a second application, different from the first application, detecting interaction with a second respective control, different from the first respective control, displayed with the second user interface (1044), such as described with reference to
In some embodiments, at the second level of immersion, a first amount of a physical environment of the electronic device is replaced, via the display generation component, with a respective user interface displayed by the electronic device (1046), such as shown in
In some embodiments, at the second level of immersion, a first amount of a physical environment of the electronic device is deemphasized, via the display generation component, by the electronic device (1050), such as shown in
In some embodiments, at the second level of immersion, a first amount of display area of the display generation component is occupied by a respective user interface displayed by the electronic device (1054), and at the third level of immersion, a second area, different from the first area, of the display area of the display generation component is occupied by the respective user interface displayed by the electronic device (1056), such as the area of user interface 912 shown from
In some embodiments, changing the second respective user interface from being displayed at the second level of immersion to being displayed at the third level of immersion includes changing atmospheric effects displayed via the display generation component (1058), such as shown in the transition from
In some embodiments, the second respective user interface includes a respective user interface element displayed by the electronic device, and changing the second respective user interface from being displayed at the second level of immersion to being displayed at the third level of immersion includes changing display of the respective user interface element (1060), such as the change in display of objects 906, 908 and/or 910 in the transition from
In some embodiments, changing the second respective user interface from being displayed at the second level of immersion to being displayed at the third level of immersion includes changing audio effects generated by the electronic device (1062). In some embodiments, the electronic device generates audio for playback via one or more speakers or audio output devices (e.g., headphones) in communication with the electronic device while displaying content/user interface(s) (e.g., atmospheric sound effects, sounds corresponding to (e.g., generated as being emitted from) virtual objects (e.g., content of an application), sounds corresponding to (e.g., generated as being emitted from) physical objects in the physical environment of the electronic device, etc.). In some embodiments, changing the level of immersion optionally changes the audio generated by the electronic device. For example, in some embodiments, increasing the level of immersion optionally causes the sounds corresponding to physical objects to be deemphasized (e.g., reduced in volume, clarity, expansiveness, etc.) and/or sounds corresponding to virtual objects to be emphasized (e.g., increased in volume, clarity, expansiveness). In some embodiments, increasing the level of immersion optionally causes atmospheric sound effects (e.g., sounds corresponding to atmospheric effects described above) to be emphasized (e.g., increased in volume, clarity, expansiveness) and/or sounds corresponding to physical objects to be deemphasized (e.g., reduced in volume, clarity, expansiveness, etc.). In some embodiments, at high (e.g., maximum) immersion, sounds corresponding to physical objects are not generated or reduced to a low level (e.g., minimum level), and sounds corresponding to virtual objects are increased to a high level (e.g., maximum level); in some embodiments, at low (e.g., minimum) immersion, sounds corresponding to physical objects are increased to a high level (e.g., maximum level), and sounds corresponding to virtual objects are not generated or reduced to a low level (e.g., minimum level). The above-described manner of varying audio effects based on immersion reduces interference of the physical environment with the user interface(s) displayed by the electronic device as the level of immersion is increased, which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient (e.g., by avoiding distraction caused by audio corresponding to the physical environment at higher levels of immersion), which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage.
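The audio behavior described above can be illustrated with a simple crossfade sketch, shown below, in which audio associated with virtual content is emphasized and audio from the physical environment is attenuated as immersion increases. The linear mapping is an assumption made for the sketch; any monotonic mapping would serve the same illustrative purpose.

```swift
import Foundation

/// Hypothetical audio mix as a function of immersion (0.0 ... 1.0): sounds tied to
/// virtual objects and atmospheric effects are emphasized as immersion rises, while
/// sounds from physical objects in the room are attenuated, reaching silence at maximum.
struct AudioMix { var virtualVolume: Double; var physicalVolume: Double }

func audioMix(forImmersion immersion: Double) -> AudioMix {
    let clamped = max(0, min(1, immersion))
    return AudioMix(virtualVolume: clamped, physicalVolume: 1 - clamped)
}

print(audioMix(forImmersion: 0.0))   // physical environment fully audible, no virtual audio
print(audioMix(forImmersion: 1.0))   // virtual audio at full level, physical sounds silenced
```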
In some embodiments, displaying the second respective user interface at the second level of immersion includes concurrently displaying content of a first application in a first portion of a display area of the display generation component, and one or more representations of a first respective portion of a physical environment of the electronic device in a second portion of the display area of the display generation component (1064), such as described with reference to user interface 934 in
In some embodiments, displaying the second respective user interface at the second level of immersion includes concurrently displaying content of a first application (e.g., playback of a video, display of a photo, etc.) and one or more representations of a physical environment (1068), such as shown in
In some embodiments, after displaying the second respective user interface at the third level of immersion and the first respective user interface at the first level of immersion, and without receiving an input to change the level of immersion of the first respective user interface from the first level of immersion, and without receiving an input to change the level of immersion of the second respective user interface from the third level of immersion (1072) (e.g., the last level of immersion at which the second respective user interface was displayed was the third level of immersion, and the last level of immersion at which the first respective user interface was displayed was the first level of immersion, and no input has been received after displaying those user interfaces at those levels of immersion to change those levels of immersion), the electronic device receives (1074), via the one or more input devices, a fourth user input corresponding to a request to display a third respective user interface of a second respective application, different from the respective application (e.g., the input is received while displaying the second respective user interface at the third level of immersion, the first respective user interface at the first level of immersion, or a different user interface at a different level of immersion). In some embodiments, the fourth user input includes detecting selection of the application icon for the second respective application in an application browsing user interface. In some embodiments, the fourth user input is a voice input for displaying the second respective application, without selecting any icon for the second respective application.
In some embodiments, in response to receiving the fourth user input (1076), the electronic device displays, via the display generation component, the third respective user interface at a fourth level of immersion (e.g., in some embodiments, the third respective user interface is displayed overlaid on the user interface that was displayed when the fourth user input was received, both of which optionally continue to be displayed over the background and/or within the three-dimensional environment). In some embodiments, displaying the third respective user interface includes ceasing display of the user interface that was displayed when the fourth user input was received. In some embodiments, the fourth level of immersion is independent of the level of immersion of the operating system user interface and/or the user interface of the respective application. Therefore, in some embodiments, the level of immersion of operating system user interfaces and/or applications does not affect the level of immersion of other application user interfaces (e.g., each application has its own, independent level of immersion). In some embodiments, the fourth level of immersion at which the third respective user interface is displayed is the level of immersion with which the third respective user interface was last displayed (prior to the current display of the third respective user interface). In some embodiments, the level of immersion of operating system user interfaces do affect the level of immersion of application user interfaces (e.g., if the system user interface is set at a first level of immersion, subsequently displaying an application user interface includes displaying the application user interface at that same first level of immersion, even if the application user interface was last displayed at a different level of immersion).
In some embodiments, while displaying the third respective user interface at the fourth level of immersion, the electronic device receives, via the one or more input devices, a fifth user input corresponding to a request to change the current level of immersion associated with the electronic device (1078), such as an input at input element 920 (e.g., a voice input, a hand gesture, a selection of a displayed user interface element and/or detection of an input (e.g., rotational input) at a mechanical input element included in the electronic device that corresponds to a request to increase or decrease the level of immersion of the currently-displayed user interface). In some embodiments, in response to receiving the fifth user input, the electronic device displays (1080), via the display generation component, the third respective user interface at a fifth level of immersion, different from the fourth level of immersion (e.g., increasing or decreasing the level of immersion of the third respective user interface in accordance with the fifth user input). In some embodiments, after receiving the fifth user input (1082), the electronic device receives (1084), via the one or more input devices, a sixth user input corresponding to a request to display the first respective user interface (e.g., a request to redisplay the first respective user interface after receiving the input to change the level of immersion of the third respective user interface). In some embodiments, in response to receiving the sixth user input, the electronic device displays (1086), via the display generation component, the first respective user interface at the first level of immersion, such as the level of immersion shown in
In some embodiments, after receiving the third user input and while displaying the first respective user interface at the first level of immersion, the electronic device receives (1092), via the one or more input devices, a fourth user input corresponding to a request to display the second respective user interface (e.g., an input to redisplay the respective application after having ceased display of the respective application to display a user interface of the operating system, such as an application browsing user interface). In some embodiments, the fourth user input is received while displaying the application browsing user interface (e.g., selection of an icon corresponding to the respective application). In some embodiments, in response to receiving the fourth user input, the electronic device displays (1094), via the display generation component, the second respective user interface at the third level of immersion, such as the level of immersion shown in
In some embodiments, the second user input is a user input of a first type and corresponds to a request to increase the current level of immersion associated with the electronic device, and the third level of immersion is greater than the second level of immersion (1096) (e.g., a clockwise rotation of a rotatable input element of the electronic device for increasing the level of immersion of the currently displayed user interface while displaying the user interface of an application (e.g., rather than a user interface of the operating system of the electronic device)). In some embodiments, while displaying the second respective user interface at the third level of immersion, the electronic device receives (1098), via the one or more input devices, a fourth user input of the first type corresponding to a request to increase the current level of immersion associated with the electronic device, such as an input at input element 920 (e.g., a further clockwise rotation of a rotatable input element of the electronic device for increasing the level of immersion of the currently displayed user interface while displaying the user interface of the respective application). In some embodiments, in response to receiving the fourth user input of the first type, the electronic device maintains display (1097) of the second respective user interface at the third level of immersion, such as described with reference to user interface 934 in
In some embodiments, while displaying the second respective user interface at the third level of immersion, the electronic device receives (1095), via the one or more input devices, a fifth user input of a second type, different from the first type, corresponding to a request to increase the current level of immersion associated with the electronic device, such as selection of element 950 in
As shown in
In
In some embodiments, device 101 displays three-dimensional environment 1101 and/or user interfaces displayed via display generation component 120 at a particular level of immersion. For example, in
Similar to as described with reference to
In some embodiments, in response to detecting a depression of input element 1120, device 101 reduces the level of immersion at which the currently displayed user interface(s) is displayed to a predetermined level of immersion (e.g., a minimum or relatively low level of immersion), and in response to detecting a release of the depression of input element 1120, device 101 resumes displaying the user interface(s) at the level of immersion at which it was displayed when the depression input was detected. In some embodiments, device 101 requires that the depression of input element 1120 be for longer than a time threshold (e.g., 0.1, 0.5, 1, 5, 10 seconds) before transitioning to the low level of immersion. For example, in
As previously mentioned, in some embodiments, in response to detecting release of input element 1120, device 101 optionally returns to the previous level of immersion at which it was displaying user interface(s) when depression of input element 1120 was detected. For example, in
In the method 1200, in some embodiments, an electronic device (e.g., computer system 101 in
In some embodiments, the first level of immersion of the first user interface optionally has one or more of the characteristics of levels of immersion described with reference to methods 1000, 1400 and 1600. The first input is optionally a user input detected via the one or more input devices, such as a voice input, a hand gesture, a selection of a displayed user interface element and/or detection of an input (e.g., rotational input) at a mechanical input element included in the electronic device that corresponds to a request to increase or decrease the level of immersion of the currently-displayed user interface. In some embodiments, the first user interface is displayed within a three-dimensional environment displayed by the electronic device via the display generation component, such as a computer-generated reality (CGR) environment such as a virtual reality (VR) environment, a mixed reality (MR) environment, or an augmented reality (AR) environment. In some embodiments, in response to detecting the first input, the electronic device displays (1204), via the display generation component, the first user interface at a respective level of immersion, greater than the first level of immersion, such as displaying user interface 1134 at the level of immersion 1132 in
In some embodiments, while displaying the first user interface at the respective level of immersion, the electronic device detects (1206), via the one or more input devices, occurrence of a first event, such as detecting input 1 on input element 1120 in
In some embodiments, in response to detecting the occurrence of the first event, the electronic device reduces (1208) a level of immersion of a user interface displayed via the display generation component to a second level of immersion that is lower than the respective level of immersion, such as the level of immersion 1132 shown in
In some embodiments, while displaying the user interface at the second level of immersion, the electronic device detects (1210), via the one or more input devices, occurrence of a second event that corresponds to a request to resume displaying the first user interface, such as detecting input 2 at input element 1120 in
In some embodiments, in response to detecting the occurrence of the second event, the electronic device resumes (1212) display of the first user interface via the display generation component, such as reverting back to the display of user interface 1134 in
In some embodiments, the second level of immersion is no immersion (1218), such as shown in
In some embodiments, the one or more input devices include a rotatable input element (1220), such as described with reference to
In some embodiments, detecting the occurrence of the first event includes detecting depression of the rotatable input element (1230), such as input 1 on input element 1120 in
In some embodiments, detecting the occurrence of the second event includes detecting rotation of the rotatable input element (1232), such as described with reference to
In some embodiments, while displaying the user interface at the second level of immersion, the electronic device detects (1234) a second respective rotation of the rotatable input element, including a first portion followed by a second portion of the second respective rotation (e.g., detecting a first part of a clockwise rotation of the rotatable input element followed by a continued second part of the clockwise rotation of the rotatable input element). In some embodiments, (e.g., while detecting the second respective rotation of the rotatable input element) in response to detecting the first portion of the second respective rotation, the electronic device displays (1236), via the display generation component, the first user interface at a seventh level of immersion greater than the second level of immersion but less than the respective level of immersion (e.g., gradually increasing the level of immersion from the second level of immersion in accordance with an amount of rotation of the rotatable input element during the first portion of the second respective rotation). In some embodiments, (e.g., while detecting the second respective rotation of the rotatable input element) in response to detecting the second portion of the second respective rotation, the electronic device increases (1238) the level of immersion of the first user interface from the seventh level of immersion to the respective level of immersion. For example, continuing to increase the level of immersion in accordance with a continued amount of rotation of the rotatable input element during the second portion of the second respective rotation. If the second respective rotation includes a total amount of rotation sufficient to increase the level of immersion from the second level of immersion to the respective level of immersion, then the first user interface is optionally displayed at the respective level of immersion. If the total amount of rotation of the second respective rotation corresponds to a level of immersion other than the respective level of immersion, the first user interface is optionally displayed at that other level of immersion. The above-described manner of gradually returning to the previous level of immersion of the device provides consistent response to the rotation of the rotatable input element, which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient, which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage.
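By way of illustration, the press-to-reveal and resume behavior described in the surrounding paragraphs can be sketched as follows: holding the input element past a time threshold stores the current level and drops to no immersion, releasing the element restores the stored level in one step, and rotating the element instead restores it gradually without exceeding the stored level. The threshold and level values are assumptions of the sketch, and the two resume paths correspond to alternative embodiments described above.

```swift
import Foundation

/// Hypothetical sketch of the "drop to low immersion, then resume" behavior.
struct ImmersionSession {
    var level: Double = 8           // current level (0 = none, 10 = maximum)
    private var resumeLevel: Double?

    /// Holding the input element past a time threshold drops to no immersion
    /// and remembers the level to come back to.
    mutating func buttonHeld(for duration: TimeInterval, threshold: TimeInterval = 1.0) {
        guard duration >= threshold, resumeLevel == nil else { return }
        resumeLevel = level          // remember where to come back to
        level = 0                    // reveal the physical environment
    }

    /// Releasing the button restores the remembered level in one step.
    mutating func buttonReleased() {
        if let previous = resumeLevel { level = previous; resumeLevel = nil }
    }

    /// Rotating instead restores gradually, but never past the remembered level.
    mutating func rotated(levels delta: Double) {
        guard let previous = resumeLevel else { return }
        level = min(previous, level + delta)
        if level == previous { resumeLevel = nil }
    }
}

var session = ImmersionSession()
session.buttonHeld(for: 1.5)     // long press: immersion drops to 0
session.rotated(levels: 5)       // partway back (level 5)
session.rotated(levels: 5)       // capped at the previous level, 8
print(session.level)             // 8.0
```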
In some embodiments, detecting the occurrence of the second event includes detecting, via a hand tracking device in communication with the display generation component, a respective gesture performed by a hand of a user of the electronic device (1240) (e.g., a particular pinch gesture between two or more fingers of the hand of the user, such as the thumb and forefinger of the user. In some embodiments, the gesture includes a particular movement and/or rotation of the hand while maintaining the pinch gesture of the hand. In some embodiments, if a different gesture other than the respective gesture is detected as being performed by the hand of the user, the electronic device optionally does not resume display of the first user interface at the respective level of immersion. In some embodiments, the respective hand gesture is different from the first event that caused the electronic device to transition to the second level of immersion). The above-described manner of resuming the previous level of immersion of the device provides a quick and efficient way of returning to the level of immersion previously in effect while avoiding accidental return to such immersion, which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient, which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage.
In some embodiments, detecting the occurrence of the first event includes detecting depression of a mechanical input element in communication with the electronic device (1242), similar to input 1 on input element 1120 in
In some embodiments, detecting the occurrence of the first event includes detecting depression of a mechanical input element in communication with the electronic device for longer than a time threshold (1244), similar to input 1 on input element 1120 in
In some embodiments, detecting the occurrence of the second event includes detecting release of a depression of a mechanical input element in communication with the electronic device (1246), such as input 2 detected at input element 1120 in
Device 101 optionally captures one or more images of the physical environment 1300 around device 101 (e.g., operating environment 100), including one or more objects in the physical environment 1300 around device 101. For example, in
In
In some embodiments, device 101 is able to detect one or more characteristics of objects, people, one or more portions of the physical environment 1300, etc., and in response to detecting certain characteristics, device 101 optionally allows one or more representations of those objects, people, or one or more portions of the physical environment 1300 to “break through” user interface 1334 such that they become visible via display generation component 120 (e.g., when they were not visible prior to detecting those characteristics). In some embodiments, representations of those objects, people, or one or more portions of the physical environment 1300 break through portions of user interface 1334 that correspond to the locations of those objects, people, or one or more portions of the physical environment 1300, and remaining portions of user interface 1334 remain displayed via display generation component 120.
For example, in
Device 101 optionally determines that the attention of a person is directed to the user of device 101 using one or more sensors on device 101 that capture images of, sounds of, and/or otherwise detect characteristics of, physical environment 1300. For example, in some embodiments, if the person is looking towards the electronic device and/or the user of the electronic device, the attention of the person is optionally determined to be directed to the user of the electronic device. In some embodiments, if the person is speaking towards the electronic device and/or the user of the electronic device, the attention of the person is optionally determined to be directed to the user of the electronic device. In some embodiments, an electronic device associated with the person detects, using sensors in communication with that electronic device, the relevant attention of the person and transmits such attention information to the electronic device of the user, which then responds accordingly. In some embodiments, the determination of attention of a person is based on one or more factors described with reference to method 1400, including based on gestures from that person (e.g., gestures directed to the user of device 101), speech from that person (e.g., speech directed to the user of device 101, speaking the name of the user of device 101, etc.), the speed with which the person is moving towards the user of device 101, the gaze of the person (e.g., gaze directed to the user of device 101), etc.
In some embodiments, additionally or alternatively to using factors about the attention of a person in physical environment 1300 to cause that person to break through user interface 1334, device 101 utilizes one or more factors about the interaction of the user of device 101 with that person in physical environment 1300 to cause that person to break through user interface 1334. For example, in some embodiments, if the user of device 101 is engaging with (e.g., gesturing towards, speaking towards, moving towards, etc.) the person, and device 101 detects such engagement of the user with that person, device 101 causes that person to break through user interface 1334. In some embodiments, device 101 uses a combination of factors including the attention of that person and the engagement of the user of device 101 with that person to break that person through user interface 1334.
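As an illustrative sketch only, the break-through decision described above can be summarized as follows: people within the threshold distance always break through, while people beyond it break through only when their attention is directed to the user or the user is engaging with them. In the hypothetical listing below, the attention and engagement flags stand in for the gaze, speech, gesture, and movement cues discussed above, and the threshold value and distance units are assumptions of the sketch.

```swift
import Foundation

/// Hypothetical decision for whether a person in the physical environment should
/// "break through" an obscuring virtual user interface. How the attention and
/// engagement signals are detected is outside the scope of this sketch.
struct DetectedPerson {
    var distance: Double              // distance from the device (units assumed)
    var attentionTowardUser: Bool     // looking/speaking/gesturing toward the user
    var userEngagingWithPerson: Bool  // the user is speaking/gesturing toward them
}

func shouldBreakThrough(_ person: DetectedPerson, thresholdDistance: Double = 3.0) -> Bool {
    if person.distance <= thresholdDistance { return true }       // close people always show
    return person.attentionTowardUser || person.userEngagingWithPerson
}

let farAndIgnoring = DetectedPerson(distance: 6, attentionTowardUser: false, userEngagingWithPerson: false)
let farButTalking = DetectedPerson(distance: 6, attentionTowardUser: true, userEngagingWithPerson: false)
print(shouldBreakThrough(farAndIgnoring))  // false: stays hidden behind the user interface
print(shouldBreakThrough(farButTalking))   // true: breaks through
```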
In some embodiments, device 101 similarly causes objects that are further than the threshold distance 1340 from device 101 to break through user interface 1334 if device 101 determines that those objects are high risk objects, thus allowing the user of device 101 to see those objects via display generation component 120 (e.g., to avoid potential danger). For example, device 101 optionally determines that objects are high risk based on one or more factors described with reference to method 1400, including whether those objects will collide with the user of device 101 (e.g., based on detected trajectories of those objects). In
As shown in
In some embodiments, device 101 allows a person, object, portion of physical environment 1300, etc., to break through user interface 1334 to various degrees based on a confidence of the determination that the object is a high risk object (e.g., in the case of objects), the attention of the person is directed to the user of device 101 (e.g., in the case of people), the user of device 101 is engaging with the person (e.g., in the case of people), etc. For example, in some embodiments, the fewer of the break through factors described herein and with reference to method 1400 a given object or person satisfies, the lower the confidence associated with that break through, and the more of the break through factors described herein and with reference to method 1400 a given object or person satisfies, the higher the confidence associated with that break through. In some embodiments, as the confidence of a given break through increases, device 101 displays the representation of that person, object, etc. through user interface 1334 with less translucency, more color saturation, higher brightness, etc. and/or displays the portion of user interface 1334 through which the representation of that person, object, etc. is displayed with more translucency, less color saturation, lower brightness, etc. For example, in
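Purely for illustration, the confidence-based blending described above can be sketched as a mapping from the fraction of satisfied break-through factors to the opacity of the passthrough representation and the translucency of the portion of the user interface in front of it, as in the hypothetical listing below. The linear mapping is an assumption of the sketch.

```swift
import Foundation

/// Hypothetical mapping from break-through confidence to how the passthrough
/// representation is blended with the virtual user interface: more satisfied
/// factors means a more opaque representation of the person or object, and a
/// more translucent user interface portion in front of it.
struct BreakthroughAppearance { var passthroughOpacity: Double; var uiOpacity: Double }

func appearance(satisfiedFactors: Int, totalFactors: Int) -> BreakthroughAppearance {
    guard totalFactors > 0 else { return BreakthroughAppearance(passthroughOpacity: 0, uiOpacity: 1) }
    let confidence = Double(satisfiedFactors) / Double(totalFactors)   // 0.0 ... 1.0
    return BreakthroughAppearance(passthroughOpacity: confidence, uiOpacity: 1 - confidence)
}

// One factor satisfied (e.g., gaze only) gives a faint, mostly obscured representation;
// all factors satisfied gives a fully visible one.
print(appearance(satisfiedFactors: 1, totalFactors: 4))   // faint breakthrough
print(appearance(satisfiedFactors: 4, totalFactors: 4))   // full breakthrough
```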
In some embodiments, device 101 breaks through objects or people that are within threshold distance 1340 of device 101 whether or not those objects are high risk, the attention of those people is directed to the user of device 101, or the user of device 101 is engaging with those people. For example, in
In the method 1400, in some embodiments, an electronic device (e.g., computer system 101 in
In some embodiments, the electronic device displays a three-dimensional environment, such as a computer-generated reality (CGR) environment such as a virtual reality (VR) environment, a mixed reality (MR) environment, or an augmented reality (AR) environment, etc. that is generated, displayed, or otherwise caused to be viewable by the electronic device. In some embodiments, the user interface element is a virtual element or object (e.g., a user interface of an application on the electronic device, a virtual object displayed in the three-dimensional environment, etc.). In some embodiments, the electronic device is not displaying a three-dimensional environment, but rather simply displaying content (e.g., the user interface with the user interface element) via the display generation component.
In some embodiments, the user interface element at least partially obscures display, via the display generation component, of a respective person in a physical environment of the electronic device (1404), such as obscuring display of people 1302a and 1308a in
In some embodiments, while displaying the user interface (1406), in accordance with a determination that the respective person is further than a threshold distance from the electronic device in the physical environment, such as people 1302a and 1308a in
In some embodiments, in accordance with a determination the respective person is further than the threshold distance from the electronic device in the physical environment and that the attention of the respective person is not directed to the user of the electronic device (e.g., the respective person is not looking at and/or speaking towards the electronic device and/or the user of the electronic device), the electronic device forgoes updating (1410) the display of the user interface element, such as not displaying a representation of person 1302a in user interface 1334 in
In some embodiments, while displaying the user interface (1412), in accordance with a determination that the respective person is closer than the threshold distance (e.g., 3, 5, 10, 15, etc.) to the electronic device in the physical environment, the electronic device updates (1414) display of the user interface element to indicate the presence of the respective person in the physical environment at the location that corresponds to the user interface element (e.g., whether or not the attention of the respective person is directed to the user of the electronic device). Therefore, in some embodiments, the electronic device updates display of the user interface element to indicate the presence of the respective person (e.g., as described previously) when the respective person is close, irrespective of the attention state of the respective person. The above-described manner of changing display of the user interface when a person is close to the electronic device provides a quick and efficient manner of allowing the user of the electronic device to see a close person in the physical environment of the electronic device via the display generation component, which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient (e.g., by not requiring separate user input to indicate the presence of the respective person in the physical environment), which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage.
In some embodiments, determining that the attention of the respective person is directed to the user of the electronic device is based on one or more gestures performed by the respective person detected by one or more sensors (e.g., one or more optical or other cameras capturing the physical environment of the electronic device, including the respective person) in communication with the electronic device (1416), such as described with reference to
In some embodiments, determining that the attention of the respective person is directed to the user of the electronic device is based on a gaze of the respective person detected by one or more sensors (e.g., one or more optical or other cameras capturing the physical environment of the electronic device, including the respective person) in communication with the electronic device (1418), such as described with reference to
In some embodiments, determining that the attention of the respective person is directed to the user of the electronic device is based on detected speech from the respective person detected by one or more sensors (e.g., one or more microphones capturing sound from the physical environment of the electronic device, including the respective person) in communication with the electronic device (1420), such as described with reference to
In some embodiments, determining that the attention of the respective person is directed to the user of the electronic device is based on a determination that the electronic device has detected that the speech from the respective person includes a name of the user of the electronic device (1422), such as described with reference to
In some embodiments, determining that the attention of the respective person is directed to the user of the electronic device is based on a (e.g., speed of) movement of the respective person (e.g., towards the user of the electronic device) detected by one or more sensors (e.g., one or more optical or other cameras capturing the physical environment of the electronic device, including the respective person) in communication with the electronic device (1424), such as described with reference to
In some embodiments, while displaying the user interface, the electronic device detects (1426) that the user of the electronic device is engaged in an interaction with the respective person. In some embodiments, in response to detecting that the user of the electronic device is engaged in the interaction with the respective person, the electronic device updates (1428) display of the user interface element to indicate the presence of the respective person in the physical environment at the location that corresponds to the user interface element, such as described with reference to
In some embodiments, updating display of the user interface element to indicate the presence of the respective person in the physical environment at the location that corresponds to the user interface element includes (1430), in accordance with a determination that one or more first criteria are satisfied, changing display of the user interface element by a first amount (e.g., if fewer than a threshold number (e.g., two, three) of the multiple above-described factors (e.g., gesture(s) of the respective person, gaze of the respective person, speed and/or movement of the respective person, speech from the respective person, gesture(s) of the user, gaze of the user, speed and/or movement of the user, speech from the user) are satisfied, the electronic device optionally updates the user interface element in a first manner, such as moving the user interface element by a first amount, increasing a transparency of the user interface element by a first amount, blurring the user interface element by a first amount, updating a first portion of the user interface, etc.). In some embodiments, updating (1432) display of the user interface element to indicate the presence of the respective person in the physical environment at the location that corresponds to the user interface element includes, in accordance with a determination that one or more second criteria, different from the first criteria, are satisfied, changing (1434) display of the user interface element by a second amount, different from the first amount, such as the differing displays of representation 1306b in
In some embodiments, the user interface includes a respective user interface element that at least partially obscures display, via the display generation component, of a respective object in a physical environment of the electronic device (1436), such as objects 1304a and 1306a in
In some embodiments, in accordance with a determination that the respective object is further than the respective threshold distance from the electronic device in the physical environment, and that the respective object does not satisfy the one or more first criteria, such as for object 1304a in
In some embodiments, before updating display of the user interface element to indicate the presence of the respective person in the physical environment at the location that corresponds to the user interface element, a first respective portion of the user interface surrounding a second respective portion of the user interface corresponding to the location of the respective person in the physical environment is displayed with a visual characteristic having a first value (1446) (e.g., the portion of the user interface element that would surround the representation of the respective person if the representation of the respective person were visible through the user interface element is displayed in a non-blurred state, with a first color profile, at a first brightness, at a first translucency, etc.). In some embodiments, after updating display of the user interface element to indicate the presence of the respective person in the physical environment at the location that corresponds to the user interface element, the first respective portion of the user interface surrounding the second respective portion of the user interface corresponding to the location of the respective person in the physical environment is displayed with the visual characteristic having a second value, different from the first value (1448) (e.g., the portion of the user interface element that surrounds the representation of the respective person that is visible through the user interface element is displayed in a blurred state, with a second color profile, different from the first color profile, at a second brightness, different from the first brightness, at a second translucency, different from the first translucency, etc.). For example, in some embodiments, the portions of the user interface displayed surrounding the representation of the respective person visible via the display generation component are displayed with a visual effect (e.g., blurred, more transparent, etc.). In some embodiments, the portions of the user interface surrounding the representation of the respective person are those portions of the user interface within a threshold distance (e.g., 0.5, 1, 3, 5, 20 feet) of (e.g., the boundary of) the representation of the respective person displayed via the display generation component. In some embodiments, the portions of the user interface further than the threshold distance of (e.g., the boundary of) the representation of the respective person are not displayed with the visual effect (e.g., are not altered as a result of the updating of the display of the user interface element based on the respective person, are displayed non-blurred, are displayed less transparent, etc.). In some embodiments, the visual effect applied to the user interface decreases in magnitude as a function of the distance from (e.g., the boundary of) the representation of the respective person.
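For illustration only, the following sketch (in Swift) shows one possible mapping of the distance-dependent visual effect described above; the linear falloff, the 3-foot cutoff, and the 0.0-to-1.0 magnitude scale are assumptions introduced here, not details of this disclosure.

```swift
import Foundation

// Illustrative sketch, not part of the disclosure: magnitude of the visual
// effect (e.g., blur, increased translucency) applied to a portion of the user
// interface located `distance` feet from the boundary of the person's
// representation. Beyond the cutoff, the portion is left unaltered.

func effectMagnitude(atDistance distance: Double,
                     cutoff: Double = 3.0) -> Double {
    guard distance < cutoff else { return 0.0 }   // farther than the threshold: no effect
    return 1.0 - (distance / cutoff)              // decreases with distance from the boundary
}

// Portions adjacent to the representation receive the full effect; portions at
// or beyond the cutoff are displayed without the effect.
for d in stride(from: 0.0, through: 4.0, by: 1.0) {
    print("distance \(d) ft -> effect \(effectMagnitude(atDistance: d))")
}
```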
The above-described manner of changing the user interface by different amounts based on distance from the representation of the respective person reduces visual clutter in the user interface, which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient (e.g., by helping to clearly indicate the presence of the respective user as being separate from the content of the user interface), which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage.
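For illustration only, the following sketch (in Swift) shows one possible form of the criteria-count-based change amounts described above with reference to (1430)-(1434): the element is changed by a first amount when fewer than a threshold number of the attention/engagement factors are satisfied, and by a second, different amount otherwise. The factor names, the two-factor threshold, and the 0.3/0.8 amounts are assumptions introduced here.

```swift
import Foundation

// Illustrative sketch, not part of the disclosure: number of satisfied
// attention/engagement factors selects the amount by which the user interface
// element is changed (e.g., moved, made more transparent, or blurred).

struct EngagementFactors {
    var personGesture = false
    var personGaze = false
    var personSpeech = false
    var personMovement = false
    var userGesture = false
    var userGaze = false
    var userSpeech = false

    var satisfiedCount: Int {
        [personGesture, personGaze, personSpeech, personMovement,
         userGesture, userGaze, userSpeech].filter { $0 }.count
    }
}

func changeAmount(for factors: EngagementFactors) -> Double {
    // Fewer than two factors satisfied: first amount; otherwise: second amount.
    factors.satisfiedCount < 2 ? 0.3 : 0.8
}

var factors = EngagementFactors(personGaze: true)
print(changeAmount(for: factors))      // first amount
factors.personSpeech = true
factors.userGaze = true
print(changeAmount(for: factors))      // second, different amount
```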
Device 101 optionally captures one or more images of the physical environment 1500 around device 101 (e.g., operating environment 100), including one or more objects in the physical environment 1500 around device 101. For example, in
In some embodiments, device 101 is able to detect one or more characteristics of device 101 and/or environment 1500, and in response to detecting certain characteristics, device 101 optionally reduces the level of immersion 1532 at which it displays user interface 1534 such that objects, people and/or one or more portions of physical environment 1500 become at least partially visible via display generation component 120 (e.g., when they were not visible, or not fully visible, prior to detecting those characteristics). In some embodiments, device 101 reduces the level of immersion 1532 in response to detecting (e.g., using an accelerometer, a GPS position detector, and/or other inertial measurement unit (IMU)) characteristics of the (e.g., speed of) movement of device 101 (e.g., if the device is moving faster than a speed threshold, such as 0.5, 1, 3, 5, 10, 20, 50 miles per hour). In some embodiments, device 101 reduces the level of immersion 1532 in response to detecting (e.g., using a camera, microphone, etc.) characteristics of physical environment 1500 (e.g., if environment 1500 includes a sound associated with a potentially dangerous situation, such as the sound of glass breaking or the sound of a fire alarm). In some embodiments, device 101 only performs the above-described reductions of immersion if device 101 detects objects and/or hazards in environment 1500; if device 101 detects no objects and/or hazards in environment 1500, device 101 optionally does not perform the above speed- and/or sound-based reductions of immersion, even if the required speed and/or sound characteristics are detected by device 101. In some embodiments, device 101 reduces the level of immersion at which device 101 is displaying user interface 1534 to a predetermined, relatively low (e.g., minimum) level of immersion such that display of the entirety of user interface 1534 is modified (e.g., as compared to the response of device 101 in
For example, in
In
In some embodiments, device 101 returns to displaying user interface 1534 at the previous level of immersion (e.g., the level of immersion at which device 101 was displaying user interface 1534 when the immersion-reducing speed was detected, such as shown in
In some embodiments, device 101 returns to displaying user interface 1534 at the previous level of immersion (e.g., the level of immersion at which device 101 was displaying user interface 1534 when the sound 1560 was detected, such as shown in
In the method 1600, in some embodiments, an electronic device (e.g., computer system 101 in
In some embodiments, the electronic device displays a three-dimensional environment, such as a computer-generated reality (CGR) environment such as a virtual reality (VR) environment, a mixed reality (MR) environment, or an augmented reality (AR) environment, etc. that is generated, displayed, or otherwise caused to be viewable by the electronic device. In some embodiments, the user interface element is a virtual element or object (e.g., a user interface of an application on the electronic device, a virtual object displayed in the three-dimensional environment, etc.). In some embodiments, the electronic device is not displaying a three-dimensional environment, but rather simply displaying content (e.g., the user interface with the user interface element) via the display generation component. In some embodiments, the user interface element at least partially obscures display, via the display generation component, of a respective portion of a physical environment of the electronic device (1604), such as obscuring display of 1502a, 1504a, 1506a and 1508a in
In some embodiments, while displaying the user interface (1606), in accordance with a determination that one or more criteria are satisfied, including a criterion that is satisfied based on a speed of a movement of the electronic device, such as shown in
In some embodiments, in accordance with a determination that the one or more criteria are not satisfied, the electronic device forgoes updating (1610) the display of the user interface element. For example, not modifying display of the user interface and/or user interface element (e.g., maintaining the level of immersion of the electronic device). As such, the user interface element optionally continues to obscure display of the respective portion via the display generation component to the same degree as it was before. The above-described manner of selectively changing display of the user interface based on the speed of the electronic device provides a quick and efficient manner of allowing the user of the electronic device to see the physical environment of the electronic device via the display generation component, but only when likely relevant (e.g., for safety when using the electronic device), which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient (e.g., by not unnecessarily updating display of the user interface, thus maintaining continuity of the displayed user interface and avoiding user inputs to reverse an erroneous update of the user interface), which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage.
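For illustration only, the following sketch (in Swift) shows one possible form of the speed-based criterion described above with reference to (1606)-(1610): when the criteria are satisfied, the level of immersion is reduced so the respective portion of the physical environment becomes visible, and otherwise the update is forgone. The 3 mph threshold and the 0-to-100 immersion scale are assumptions introduced here.

```swift
import Foundation

// Illustrative sketch, not part of the disclosure: reduce the level of
// immersion when the speed-based criterion (and any other criteria) are
// satisfied; otherwise maintain the current display.

struct ImmersionState {
    var level: Int   // 0 = minimum immersion, 100 = maximum immersion (assumed scale)
}

func updateImmersion(_ state: inout ImmersionState,
                     deviceSpeedMph: Double,
                     speedThresholdMph: Double = 3.0,
                     otherCriteriaSatisfied: Bool = true) {
    guard deviceSpeedMph > speedThresholdMph, otherCriteriaSatisfied else {
        return  // criteria not satisfied: forgo updating (maintain immersion)
    }
    // Criteria satisfied: drop to a predetermined, relatively low level so the
    // obscured portion of the physical environment is revealed.
    state.level = 0
}

var state = ImmersionState(level: 80)
updateImmersion(&state, deviceSpeedMph: 1.0)   // moving slowly: display maintained
print(state.level)                             // 80
updateImmersion(&state, deviceSpeedMph: 6.0)   // moving fast: immersion reduced
print(state.level)                             // 0
```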
In some embodiments, the one or more criteria include a criterion that is satisfied when the electronic device is not located in a vehicle (e.g., an automobile, an airplane, a boat, a train, a bus, etc.), and not satisfied when the electronic device is located in the vehicle (1612). For example, if the electronic device determines that it is located in a vehicle, then updating the display of the user interface element to reduce the degree to which the user interface element obscures display, via the display generation component, of the respective portion of the physical environment is optionally forgone even if the speed of movement of the electronic device satisfies the criterion that is satisfied based on the speed of the movement of the electronic device. In some embodiments, the speed of the movement of the electronic device that causes the updating of the display of the user interface element as described above is the speed of the movement of the electronic device with respect to its immediate surroundings in the physical environment. Therefore, in some embodiments, while in a vehicle, even if the vehicle is moving at a speed that would satisfy the criterion that is satisfied based on the speed of the movement of the electronic device, the electronic device would optionally not be moving quickly relative to its immediate surroundings in the physical environment (e.g., within the vehicle), and therefore would optionally not update the display of the user interface element as described above. In some embodiments, the electronic device determines it is in a vehicle based on characteristics of its movement (e.g., moving faster than a user could move without being in a vehicle), based on image recognition of its surroundings, and/or based on communication with an electronic system within the vehicle (e.g., the infotainment system of the vehicle). In some embodiments, if the electronic device detects movement at a high speed using a first set of sensors (e.g., via IMU) but does not detect movement with respect to its immediate surroundings in the physical environment using a second set of sensors (e.g., via an optical camera), the electronic device determines that it is in a vehicle or in another location in which the detected high speed of movement via the first set of sensors should not (and will not) cause updating of the user interface element as described here. The above-described manner of accounting for whether the electronic device is traveling in a vehicle provides a quick and efficient manner of avoiding unnecessarily modifying display of the user interface element based on speed in situations where the speed is not with respect to the immediate surroundings of the electronic device, which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient (e.g., by not unnecessarily updating display of the user interface, thus maintaining continuity of the displayed user interface and avoiding user inputs to reverse an erroneous update of the user interface), which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage.
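For illustration only, the following sketch (in Swift) shows one possible form of the vehicle check described in the preceding paragraph: a high speed reported by a first set of sensors (e.g., an IMU) combined with little movement relative to the immediate surroundings reported by a second set of sensors (e.g., an optical camera) is treated as being in a vehicle, and the speed-based update is suppressed. The function names and thresholds are assumptions introduced here.

```swift
import Foundation

// Illustrative sketch, not part of the disclosure: suppress the speed-based
// update when the detected speed is not with respect to the device's
// immediate surroundings (e.g., the device is inside a moving vehicle).

func isLikelyInVehicle(imuSpeedMph: Double,
                       cameraRelativeSpeedMph: Double,
                       highSpeedThreshold: Double = 20.0,
                       stationaryThreshold: Double = 1.0) -> Bool {
    imuSpeedMph > highSpeedThreshold && cameraRelativeSpeedMph < stationaryThreshold
}

func shouldReduceImmersionForSpeed(imuSpeedMph: Double,
                                   cameraRelativeSpeedMph: Double,
                                   speedThresholdMph: Double = 3.0) -> Bool {
    // In a vehicle, the high speed is not relative to the immediate
    // surroundings, so the speed criterion is treated as not satisfied.
    if isLikelyInVehicle(imuSpeedMph: imuSpeedMph,
                         cameraRelativeSpeedMph: cameraRelativeSpeedMph) {
        return false
    }
    return imuSpeedMph > speedThresholdMph
}

print(shouldReduceImmersionForSpeed(imuSpeedMph: 55, cameraRelativeSpeedMph: 0.2)) // false (in a car)
print(shouldReduceImmersionForSpeed(imuSpeedMph: 8, cameraRelativeSpeedMph: 8))    // true (e.g., running)
```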
In some embodiments, the one or more criteria include a criterion that is satisfied when the physical environment of the electronic device includes one or more detected hazards (1614) (e.g., stairs that the user can fall down, an object that the user can trip over, etc.). In some embodiments, the electronic device uses one or more sensors (e.g., cameras) to detect and/or identify the one or more hazards in the physical environment of the electronic device. In some embodiments, the electronic device detects and/or responds to the detected hazards in manners described with reference to method 1400 (e.g., based on proximity to the user, based on movement characteristics of the hazards, etc.). The above-described manner of selectively changing display of the user interface based on the existence of hazards in the physical environment of the electronic device provides a quick and efficient manner of allowing the user of the electronic device to see the physical environment of the electronic device via the display generation component, but only when likely relevant (e.g., for safety when using the electronic device), which simplifies the interaction between the user and the electronic device and enhances the operability of the electronic device and makes the user-device interface more efficient (e.g., by not unnecessarily updating display of the user interface when there are no hazards in the physical environment of the electronic device, thus maintaining continuity of the displayed user interface and avoiding user inputs to reverse an erroneous update of the user interface), which additionally reduces power usage and improves battery life of the electronic device by enabling the user to use the electronic device more quickly and efficiently, while reducing errors in usage.
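For illustration only, the following sketch (in Swift) shows one way the hazard criterion of (1614) could be combined with the speed criterion so that the update is performed only when one or more hazards are detected. The hazard names are assumptions introduced here, not details of this disclosure.

```swift
import Foundation

// Illustrative sketch, not part of the disclosure: the criteria are satisfied
// only when the speed criterion is met and at least one hazard is detected in
// the physical environment.

enum DetectedHazard {
    case stairs
    case tripHazard
    case other(String)
}

func criteriaSatisfied(speedCriterionMet: Bool,
                       detectedHazards: [DetectedHazard]) -> Bool {
    speedCriterionMet && !detectedHazards.isEmpty
}

print(criteriaSatisfied(speedCriterionMet: true, detectedHazards: []))         // false: no hazards
print(criteriaSatisfied(speedCriterionMet: true, detectedHazards: [.stairs]))  // true
```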
In some embodiments, the one or more criteria include a criterion that is satisfied when the respective portion of the physical environment of the electronic device includes one or more physical objects (1616), such as in
In some embodiments, while displaying the user interface (1618), in accordance with a determination that one or more second criteria are satisfied, including a criterion that is satisfied based on one or more characteristics of one or more physical objects in the physical environment of the electronic device, such as described with reference to
In some embodiments, while displaying the user interface (1622), in accordance with a determination that one or more second criteria are satisfied, including a criterion that is satisfied based on a sound detected in the physical environment of the electronic device, such as sound 1560 in
In some embodiments, aspects/operations of methods 800, 1000, 1200, 1400, and 1600 may be interchanged, substituted, and/or added between these methods. For brevity, these details are not repeated here.
The foregoing description, for purpose of explanation, has been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles of the invention and its practical applications, to thereby enable others skilled in the art to best use the invention and various described embodiments with various modifications as are suited to the particular use contemplated.
This application is a continuation of U.S. patent application Ser. No. 17/932,655, filed Sep. 15, 2022, published on Jan. 12, 2023 as U.S. Publication No. 2023-0008537, which is a continuation of U.S. patent application Ser. No. 17/448,876, filed Sep. 25, 2021, issued on Dec. 6, 2022 as U.S. Pat. No. 11,520,456, which claims the benefit of U.S. Provisional Application No. 63/083,792, filed Sep. 25, 2020, the contents of which are incorporated herein by reference in their entireties for all purposes.
Number | Name | Date | Kind |
---|---|---|---|
1173824 | Mckee | Feb 1916 | A |
5515488 | Hoppe et al. | May 1996 | A |
5524195 | Clanton et al. | Jun 1996 | A |
5610828 | Kodosky et al. | Mar 1997 | A |
5737553 | Bartok | Apr 1998 | A |
5740440 | West | Apr 1998 | A |
5751287 | Hahn et al. | May 1998 | A |
5758122 | Corda et al. | May 1998 | A |
5794178 | Caid et al. | Aug 1998 | A |
5877766 | Bates et al. | Mar 1999 | A |
5900849 | Gallery | May 1999 | A |
5933143 | Kobayashi | Aug 1999 | A |
5990886 | Serdy et al. | Nov 1999 | A |
6061060 | Berry et al. | May 2000 | A |
6078310 | Tognazzini | Jun 2000 | A |
6108004 | Medl | Aug 2000 | A |
6112015 | Planas et al. | Aug 2000 | A |
6154559 | Beardsley | Nov 2000 | A |
6323846 | Westerman et al. | Nov 2001 | B1 |
6456296 | Cataudella et al. | Sep 2002 | B1 |
6570557 | Westerman et al. | May 2003 | B1 |
6584465 | Zhu et al. | Jun 2003 | B1 |
6677932 | Westerman | Jan 2004 | B1 |
6756997 | Ward et al. | Jun 2004 | B1 |
7035903 | Baldonado | Apr 2006 | B1 |
7134130 | Thomas | Nov 2006 | B1 |
7137074 | Newton et al. | Nov 2006 | B1 |
7230629 | Reynolds et al. | Jun 2007 | B2 |
7614008 | Ording | Nov 2009 | B2 |
7633076 | Huppi et al. | Dec 2009 | B2 |
7653883 | Hotelling et al. | Jan 2010 | B2 |
7657849 | Chaudhri et al. | Feb 2010 | B2 |
7663607 | Hotelling et al. | Feb 2010 | B2 |
7706579 | Oijer | Apr 2010 | B2 |
7844914 | Andre et al. | Nov 2010 | B2 |
7957762 | Herz et al. | Jun 2011 | B2 |
8006002 | Kalayjian et al. | Aug 2011 | B2 |
8239784 | Hotelling et al. | Aug 2012 | B2 |
8279180 | Hotelling et al. | Oct 2012 | B2 |
8341541 | Holecek et al. | Dec 2012 | B2 |
8381135 | Hotelling et al. | Feb 2013 | B2 |
8479122 | Hotelling et al. | Jul 2013 | B2 |
8593558 | Gardiner et al. | Nov 2013 | B2 |
8724856 | King | May 2014 | B1 |
8793620 | Stafford | Jul 2014 | B2 |
8793729 | Adimatyam et al. | Jul 2014 | B2 |
8803873 | Yoo et al. | Aug 2014 | B2 |
8866880 | Tan et al. | Oct 2014 | B2 |
8896632 | Macdougall et al. | Nov 2014 | B2 |
8947323 | Raffle et al. | Feb 2015 | B1 |
8970478 | Johansson | Mar 2015 | B2 |
8970629 | Kim et al. | Mar 2015 | B2 |
8994718 | Latta et al. | Mar 2015 | B2 |
9007301 | Raffle et al. | Apr 2015 | B1 |
9108109 | Pare et al. | Aug 2015 | B2 |
9185062 | Yang | Nov 2015 | B1 |
9189611 | Wssingbo | Nov 2015 | B2 |
9201500 | Srinivasan et al. | Dec 2015 | B2 |
9256785 | Qvarfordt | Feb 2016 | B2 |
9293118 | Matsui | Mar 2016 | B2 |
9316827 | Lindley et al. | Apr 2016 | B2 |
9348458 | Hotelling et al. | May 2016 | B2 |
9400559 | Latta et al. | Jul 2016 | B2 |
9448635 | Macdougall et al. | Sep 2016 | B2 |
9448687 | Mckenzie et al. | Sep 2016 | B1 |
9465479 | Cho et al. | Oct 2016 | B2 |
9491374 | Avrahami et al. | Nov 2016 | B1 |
9526127 | Taubman et al. | Dec 2016 | B1 |
9544257 | Ogundokun et al. | Jan 2017 | B2 |
9563331 | Poulos et al. | Feb 2017 | B2 |
9575559 | Andrysco | Feb 2017 | B2 |
9619519 | Dorner | Apr 2017 | B1 |
9672588 | Doucette et al. | Jun 2017 | B1 |
9681112 | Son | Jun 2017 | B2 |
9684372 | Xun et al. | Jun 2017 | B2 |
9734402 | Jang et al. | Aug 2017 | B2 |
9778814 | Ambrus et al. | Oct 2017 | B2 |
9829708 | Asada | Nov 2017 | B1 |
9851866 | Goossens et al. | Dec 2017 | B2 |
9864498 | Olsson et al. | Jan 2018 | B2 |
9886087 | Wald | Feb 2018 | B1 |
9933833 | Tu et al. | Apr 2018 | B2 |
9933937 | Lemay et al. | Apr 2018 | B2 |
9934614 | Ramsby et al. | Apr 2018 | B2 |
10049460 | Romano et al. | Aug 2018 | B2 |
10203764 | Katz et al. | Feb 2019 | B2 |
10307671 | Barney et al. | Jun 2019 | B2 |
10353532 | Holz et al. | Jul 2019 | B1 |
10394320 | George-svahn et al. | Aug 2019 | B2 |
10534439 | Raffa et al. | Jan 2020 | B2 |
10565448 | Bell et al. | Feb 2020 | B2 |
10664048 | Cieplinski | May 2020 | B2 |
10664050 | Alcaide et al. | May 2020 | B2 |
10678403 | Duarte et al. | Jun 2020 | B2 |
10699488 | Terrano | Jun 2020 | B1 |
10701661 | Coelho et al. | Jun 2020 | B1 |
10732721 | Clements | Aug 2020 | B1 |
10754434 | Hall et al. | Aug 2020 | B2 |
10768693 | Powderly et al. | Sep 2020 | B2 |
10861242 | Lacey et al. | Dec 2020 | B2 |
10890967 | Stellmach et al. | Jan 2021 | B2 |
10956724 | Terrano | Mar 2021 | B1 |
10983663 | Iglesias | Apr 2021 | B2 |
11055920 | Bramwell et al. | Jul 2021 | B1 |
11079995 | Hulbert et al. | Aug 2021 | B1 |
11082463 | Felman | Aug 2021 | B2 |
11112875 | Zhou et al. | Sep 2021 | B1 |
11175791 | Patnaikuni et al. | Nov 2021 | B1 |
11199898 | Blume et al. | Dec 2021 | B2 |
11200742 | Post et al. | Dec 2021 | B1 |
11232643 | Stevens et al. | Jan 2022 | B1 |
11294472 | Tang et al. | Apr 2022 | B2 |
11294475 | Pinchon et al. | Apr 2022 | B1 |
11307653 | Qian et al. | Apr 2022 | B1 |
11340756 | Faulkner et al. | May 2022 | B2 |
11348300 | Zimmermann et al. | May 2022 | B2 |
11461973 | Pinchon | Oct 2022 | B2 |
11496571 | Berliner et al. | Nov 2022 | B2 |
11573363 | Zou et al. | Feb 2023 | B2 |
11574452 | Berliner et al. | Feb 2023 | B2 |
11720171 | Pastrana Vicente et al. | Aug 2023 | B2 |
11726577 | Katz | Aug 2023 | B2 |
11733824 | Iskandar et al. | Aug 2023 | B2 |
11762457 | Ikkai et al. | Sep 2023 | B1 |
12099653 | Chawda et al. | Sep 2024 | B2 |
12099695 | Smith et al. | Sep 2024 | B1 |
12113948 | Smith et al. | Oct 2024 | B1 |
12118200 | Shutzberg et al. | Oct 2024 | B1 |
20010047250 | Schuller et al. | Nov 2001 | A1 |
20020015024 | Westerman et al. | Feb 2002 | A1 |
20020044152 | Abbott et al. | Apr 2002 | A1 |
20020065778 | Bouet et al. | May 2002 | A1 |
20030038754 | Goldstein et al. | Feb 2003 | A1 |
20030151611 | Turpin et al. | Aug 2003 | A1 |
20030222924 | Baron | Dec 2003 | A1 |
20040059784 | Caughey | Mar 2004 | A1 |
20040104806 | Yui et al. | Jun 2004 | A1 |
20040243926 | Trenbeath et al. | Dec 2004 | A1 |
20050044510 | Yi | Feb 2005 | A1 |
20050073136 | Larsson et al. | Apr 2005 | A1 |
20050100210 | Rice et al. | May 2005 | A1 |
20050138572 | Good et al. | Jun 2005 | A1 |
20050144570 | Loverin et al. | Jun 2005 | A1 |
20050144571 | Loverin et al. | Jun 2005 | A1 |
20050175218 | Vertegaal et al. | Aug 2005 | A1 |
20050190059 | Wehrenberg | Sep 2005 | A1 |
20050198143 | Moody et al. | Sep 2005 | A1 |
20050216866 | Rosen et al. | Sep 2005 | A1 |
20060017692 | Wehrenberg et al. | Jan 2006 | A1 |
20060028400 | Lapstun et al. | Feb 2006 | A1 |
20060033724 | Chaudhri et al. | Feb 2006 | A1 |
20060080702 | Diez et al. | Apr 2006 | A1 |
20060156228 | Gallo et al. | Jul 2006 | A1 |
20060197753 | Hotelling | Sep 2006 | A1 |
20060256083 | Rosenberg | Nov 2006 | A1 |
20060283214 | Donadon et al. | Dec 2006 | A1 |
20070259716 | Mattice et al. | Nov 2007 | A1 |
20080181502 | Yang | Jul 2008 | A1 |
20080211771 | Richardson | Sep 2008 | A1 |
20090064035 | Shibata et al. | Mar 2009 | A1 |
20090146779 | Kumar et al. | Jun 2009 | A1 |
20090231356 | Barnes et al. | Sep 2009 | A1 |
20100097375 | Tadaishi et al. | Apr 2010 | A1 |
20100150526 | Rose et al. | Jun 2010 | A1 |
20100177049 | Levy et al. | Jul 2010 | A1 |
20100188503 | Tsai et al. | Jul 2010 | A1 |
20100269145 | Ingrassia et al. | Oct 2010 | A1 |
20110018895 | Buzyn et al. | Jan 2011 | A1 |
20110018896 | Buzyn et al. | Jan 2011 | A1 |
20110098029 | Rhoads et al. | Apr 2011 | A1 |
20110156879 | Matsushita et al. | Jun 2011 | A1 |
20110169927 | Mages et al. | Jul 2011 | A1 |
20110175932 | Yu et al. | Jul 2011 | A1 |
20110216060 | Weising et al. | Sep 2011 | A1 |
20110254865 | Yee et al. | Oct 2011 | A1 |
20110310001 | Madau et al. | Dec 2011 | A1 |
20120066638 | Ohri | Mar 2012 | A1 |
20120075496 | Akifusa et al. | Mar 2012 | A1 |
20120086624 | Thompson et al. | Apr 2012 | A1 |
20120113223 | Hilliges et al. | May 2012 | A1 |
20120124525 | Kang | May 2012 | A1 |
20120131631 | Bhogal et al. | May 2012 | A1 |
20120151416 | Bell et al. | Jun 2012 | A1 |
20120170840 | Caruso et al. | Jul 2012 | A1 |
20120184372 | Laarakkers et al. | Jul 2012 | A1 |
20120218395 | Andersen et al. | Aug 2012 | A1 |
20120256967 | Baldwin et al. | Oct 2012 | A1 |
20120257035 | Larsen | Oct 2012 | A1 |
20120272179 | Stafford | Oct 2012 | A1 |
20120290401 | Neven | Nov 2012 | A1 |
20130027860 | Masaki et al. | Jan 2013 | A1 |
20130127850 | Bindon | May 2013 | A1 |
20130148850 | Matsuda et al. | Jun 2013 | A1 |
20130169533 | Jahnke | Jul 2013 | A1 |
20130190044 | Kulas | Jul 2013 | A1 |
20130211843 | Clarkson | Aug 2013 | A1 |
20130222410 | Kameyama et al. | Aug 2013 | A1 |
20130229345 | Day et al. | Sep 2013 | A1 |
20130265227 | Julian | Oct 2013 | A1 |
20130271397 | Hildreth et al. | Oct 2013 | A1 |
20130278501 | Bulzacki | Oct 2013 | A1 |
20130286004 | Mcculloch et al. | Oct 2013 | A1 |
20130293456 | Son et al. | Nov 2013 | A1 |
20130300648 | Kim et al. | Nov 2013 | A1 |
20130300654 | Seki | Nov 2013 | A1 |
20130326364 | Latta et al. | Dec 2013 | A1 |
20130335301 | Wong et al. | Dec 2013 | A1 |
20130342564 | Kinnebrew et al. | Dec 2013 | A1 |
20130342570 | Kinnebrew et al. | Dec 2013 | A1 |
20140002338 | Raffa et al. | Jan 2014 | A1 |
20140028548 | Bychkov et al. | Jan 2014 | A1 |
20140049462 | Weinberger et al. | Feb 2014 | A1 |
20140068692 | Archibong et al. | Mar 2014 | A1 |
20140075361 | Reynolds et al. | Mar 2014 | A1 |
20140108942 | Freeman et al. | Apr 2014 | A1 |
20140125584 | Xun et al. | May 2014 | A1 |
20140125585 | Song et al. | May 2014 | A1 |
20140126782 | Takai et al. | May 2014 | A1 |
20140132499 | Schwesinger et al. | May 2014 | A1 |
20140139426 | Kryze et al. | May 2014 | A1 |
20140164928 | Kim | Jun 2014 | A1 |
20140168267 | Kim et al. | Jun 2014 | A1 |
20140168453 | Shoemake et al. | Jun 2014 | A1 |
20140198017 | Lamb et al. | Jul 2014 | A1 |
20140232639 | Hayashi et al. | Aug 2014 | A1 |
20140247208 | Henderek et al. | Sep 2014 | A1 |
20140247210 | Henderek et al. | Sep 2014 | A1 |
20140258942 | Kutliroff et al. | Sep 2014 | A1 |
20140268054 | Olsson et al. | Sep 2014 | A1 |
20140282272 | Kies et al. | Sep 2014 | A1 |
20140285641 | Kato et al. | Sep 2014 | A1 |
20140304612 | Collin | Oct 2014 | A1 |
20140320404 | Kasahara | Oct 2014 | A1 |
20140347391 | Keane et al. | Nov 2014 | A1 |
20140351753 | Shin et al. | Nov 2014 | A1 |
20140372957 | Keane et al. | Dec 2014 | A1 |
20140375541 | Nister et al. | Dec 2014 | A1 |
20150009118 | Thomas et al. | Jan 2015 | A1 |
20150035822 | Arsan et al. | Feb 2015 | A1 |
20150035832 | Sugden et al. | Feb 2015 | A1 |
20150042679 | Järvenpää | Feb 2015 | A1 |
20150067580 | Um et al. | Mar 2015 | A1 |
20150077335 | Taguchi et al. | Mar 2015 | A1 |
20150082180 | Ames et al. | Mar 2015 | A1 |
20150095844 | Cho et al. | Apr 2015 | A1 |
20150123890 | Kapur et al. | May 2015 | A1 |
20150128075 | Kempinski | May 2015 | A1 |
20150131850 | Qvarfordt | May 2015 | A1 |
20150135108 | Pope et al. | May 2015 | A1 |
20150169506 | Leventhal et al. | Jun 2015 | A1 |
20150177937 | Poletto et al. | Jun 2015 | A1 |
20150187093 | Chu et al. | Jul 2015 | A1 |
20150205106 | Norden | Jul 2015 | A1 |
20150212576 | Ambrus et al. | Jul 2015 | A1 |
20150220152 | Tait et al. | Aug 2015 | A1 |
20150227285 | Lee et al. | Aug 2015 | A1 |
20150242095 | Sonnenberg | Aug 2015 | A1 |
20150317832 | Ebstyne et al. | Nov 2015 | A1 |
20150331240 | Poulos et al. | Nov 2015 | A1 |
20150331576 | Piya et al. | Nov 2015 | A1 |
20150332091 | Kim | Nov 2015 | A1 |
20150370323 | Cieplinski et al. | Dec 2015 | A1 |
20160012642 | Lee et al. | Jan 2016 | A1 |
20160015470 | Border | Jan 2016 | A1 |
20160018898 | Tu et al. | Jan 2016 | A1 |
20160018900 | Tu et al. | Jan 2016 | A1 |
20160026242 | Burns et al. | Jan 2016 | A1 |
20160026243 | Bertram et al. | Jan 2016 | A1 |
20160026253 | Bradski et al. | Jan 2016 | A1 |
20160041391 | Van et al. | Feb 2016 | A1 |
20160062636 | Jung et al. | Mar 2016 | A1 |
20160093108 | Mao et al. | Mar 2016 | A1 |
20160098094 | Minkkinen | Apr 2016 | A1 |
20160133052 | Choi et al. | May 2016 | A1 |
20160171304 | Golding et al. | Jun 2016 | A1 |
20160179191 | Kim et al. | Jun 2016 | A1 |
20160179336 | Ambrus et al. | Jun 2016 | A1 |
20160193104 | Du | Jul 2016 | A1 |
20160196692 | Kjallstrom et al. | Jul 2016 | A1 |
20160216768 | Goetz et al. | Jul 2016 | A1 |
20160253063 | Critchlow | Sep 2016 | A1 |
20160253821 | Romano et al. | Sep 2016 | A1 |
20160275702 | Reynolds et al. | Sep 2016 | A1 |
20160306434 | Ferrin | Oct 2016 | A1 |
20160309081 | Frahm et al. | Oct 2016 | A1 |
20160313890 | Walline et al. | Oct 2016 | A1 |
20160350973 | Shapira et al. | Dec 2016 | A1 |
20160357266 | Patel et al. | Dec 2016 | A1 |
20160379409 | Gavriliuc et al. | Dec 2016 | A1 |
20170038829 | Lanier et al. | Feb 2017 | A1 |
20170038837 | Faaborg et al. | Feb 2017 | A1 |
20170038849 | Hwang | Feb 2017 | A1 |
20170039770 | Lanier et al. | Feb 2017 | A1 |
20170046872 | Geselowitz et al. | Feb 2017 | A1 |
20170060230 | Faaborg et al. | Mar 2017 | A1 |
20170123487 | Hazra et al. | May 2017 | A1 |
20170131964 | Baek et al. | May 2017 | A1 |
20170132694 | Damy | May 2017 | A1 |
20170132822 | Marschke et al. | May 2017 | A1 |
20170146801 | Stempora | May 2017 | A1 |
20170148339 | Van Curen et al. | May 2017 | A1 |
20170153866 | Grinberg et al. | Jun 2017 | A1 |
20170206691 | Harrises et al. | Jul 2017 | A1 |
20170212583 | Krasadakis | Jul 2017 | A1 |
20170228130 | Palmaro | Aug 2017 | A1 |
20170236332 | Kipman et al. | Aug 2017 | A1 |
20170285737 | Khalid et al. | Oct 2017 | A1 |
20170287225 | Powderly et al. | Oct 2017 | A1 |
20170308163 | Cieplinski et al. | Oct 2017 | A1 |
20170315715 | Fujita et al. | Nov 2017 | A1 |
20170344223 | Holzer et al. | Nov 2017 | A1 |
20170358141 | Stafford et al. | Dec 2017 | A1 |
20170364198 | Yoganandan et al. | Dec 2017 | A1 |
20180024681 | Bernstein et al. | Jan 2018 | A1 |
20180045963 | Hoover et al. | Feb 2018 | A1 |
20180075658 | Lanier et al. | Mar 2018 | A1 |
20180081519 | Kim | Mar 2018 | A1 |
20180095634 | Alexander | Apr 2018 | A1 |
20180095635 | Valdivia et al. | Apr 2018 | A1 |
20180095649 | Valdivia et al. | Apr 2018 | A1 |
20180101223 | Ishihara et al. | Apr 2018 | A1 |
20180114364 | Mcphee et al. | Apr 2018 | A1 |
20180150204 | Macgillivray | May 2018 | A1 |
20180150997 | Austin | May 2018 | A1 |
20180157332 | Nie | Jun 2018 | A1 |
20180158222 | Hayashi | Jun 2018 | A1 |
20180181199 | Harvey et al. | Jun 2018 | A1 |
20180181272 | Olsson et al. | Jun 2018 | A1 |
20180188802 | Okumura | Jul 2018 | A1 |
20180197336 | Rochford et al. | Jul 2018 | A1 |
20180210628 | Mcphee et al. | Jul 2018 | A1 |
20180239144 | Woods et al. | Aug 2018 | A1 |
20180275753 | Publicover et al. | Sep 2018 | A1 |
20180300023 | Hein | Oct 2018 | A1 |
20180315248 | Bastov et al. | Nov 2018 | A1 |
20180322701 | Pahud et al. | Nov 2018 | A1 |
20180348861 | Uscinski et al. | Dec 2018 | A1 |
20190012060 | Moore et al. | Jan 2019 | A1 |
20190018498 | West et al. | Jan 2019 | A1 |
20190034076 | Vinayak et al. | Jan 2019 | A1 |
20190050062 | Chen et al. | Feb 2019 | A1 |
20190073109 | Zhang et al. | Mar 2019 | A1 |
20190080572 | Kim et al. | Mar 2019 | A1 |
20190088149 | Fink et al. | Mar 2019 | A1 |
20190094963 | Nijs | Mar 2019 | A1 |
20190094979 | Hall et al. | Mar 2019 | A1 |
20190101991 | Brennan | Apr 2019 | A1 |
20190130633 | Haddad et al. | May 2019 | A1 |
20190130733 | Hodge | May 2019 | A1 |
20190146128 | Cao | May 2019 | A1 |
20190172261 | Alt et al. | Jun 2019 | A1 |
20190204906 | Ross et al. | Jul 2019 | A1 |
20190227763 | Kaufthal | Jul 2019 | A1 |
20190251884 | Burns et al. | Aug 2019 | A1 |
20190258365 | Zurmoehle et al. | Aug 2019 | A1 |
20190279407 | Mchugh et al. | Sep 2019 | A1 |
20190294312 | Rohrbacher | Sep 2019 | A1 |
20190310757 | Lee et al. | Oct 2019 | A1 |
20190324529 | Stellmach et al. | Oct 2019 | A1 |
20190332244 | Beszteri et al. | Oct 2019 | A1 |
20190333278 | Palangie et al. | Oct 2019 | A1 |
20190339770 | Kurlethimar et al. | Nov 2019 | A1 |
20190346678 | Nocham | Nov 2019 | A1 |
20190346922 | Young et al. | Nov 2019 | A1 |
20190354259 | Park | Nov 2019 | A1 |
20190361521 | Stellmach et al. | Nov 2019 | A1 |
20190362557 | Lacey et al. | Nov 2019 | A1 |
20190370492 | Falchuk et al. | Dec 2019 | A1 |
20190371072 | Lindberg et al. | Dec 2019 | A1 |
20190377487 | Bailey et al. | Dec 2019 | A1 |
20190379765 | Fajt et al. | Dec 2019 | A1 |
20190384406 | Smith et al. | Dec 2019 | A1 |
20200004401 | Hwang et al. | Jan 2020 | A1 |
20200012341 | Stellmach et al. | Jan 2020 | A1 |
20200026349 | Fontanel et al. | Jan 2020 | A1 |
20200043243 | Bhushan et al. | Feb 2020 | A1 |
20200082602 | Jones | Mar 2020 | A1 |
20200089314 | Poupyrev et al. | Mar 2020 | A1 |
20200092537 | Sutter | Mar 2020 | A1 |
20200098140 | Jagnow et al. | Mar 2020 | A1 |
20200098173 | Mccall | Mar 2020 | A1 |
20200117213 | Tian et al. | Apr 2020 | A1 |
20200126291 | Nguyen et al. | Apr 2020 | A1 |
20200128232 | Hwang et al. | Apr 2020 | A1 |
20200129850 | Ohashi | Apr 2020 | A1 |
20200159017 | Lin et al. | May 2020 | A1 |
20200225735 | Schwarz | Jul 2020 | A1 |
20200225746 | Bar-zeev et al. | Jul 2020 | A1 |
20200225747 | Bar-zeev et al. | Jul 2020 | A1 |
20200225830 | Tang et al. | Jul 2020 | A1 |
20200226814 | Tang et al. | Jul 2020 | A1 |
20200285314 | Cieplinski et al. | Sep 2020 | A1 |
20200322178 | Wang et al. | Oct 2020 | A1 |
20200322575 | Valli | Oct 2020 | A1 |
20200356221 | Behzadi et al. | Nov 2020 | A1 |
20200357374 | Verweij et al. | Nov 2020 | A1 |
20200363867 | Azimi | Nov 2020 | A1 |
20200371673 | Faulkner | Nov 2020 | A1 |
20200387214 | Ravasz et al. | Dec 2020 | A1 |
20200387228 | Ravasz et al. | Dec 2020 | A1 |
20200387287 | Ravasz et al. | Dec 2020 | A1 |
20200410960 | Saito et al. | Dec 2020 | A1 |
20210074062 | Madonna et al. | Mar 2021 | A1 |
20210090337 | Ravasz et al. | Mar 2021 | A1 |
20210096726 | Faulkner et al. | Apr 2021 | A1 |
20210097776 | Faulkner et al. | Apr 2021 | A1 |
20210103333 | Cieplinski et al. | Apr 2021 | A1 |
20210125414 | Berkebile | Apr 2021 | A1 |
20210191600 | Lemay et al. | Jun 2021 | A1 |
20210286502 | Lemay et al. | Sep 2021 | A1 |
20210295602 | Scapel et al. | Sep 2021 | A1 |
20210303074 | Vanblon et al. | Sep 2021 | A1 |
20210303107 | Pla I Conesa et al. | Sep 2021 | A1 |
20210312684 | Zimmermann et al. | Oct 2021 | A1 |
20210319617 | Ahn et al. | Oct 2021 | A1 |
20210327140 | Rothkopf et al. | Oct 2021 | A1 |
20210339134 | Knoppert | Nov 2021 | A1 |
20210350564 | Peuhkurinen et al. | Nov 2021 | A1 |
20210350604 | Pejsa et al. | Nov 2021 | A1 |
20210365108 | Burns et al. | Nov 2021 | A1 |
20210368136 | Chalmers et al. | Nov 2021 | A1 |
20210375022 | Lee et al. | Dec 2021 | A1 |
20220011577 | Lawver | Jan 2022 | A1 |
20220011855 | Hazra et al. | Jan 2022 | A1 |
20220012002 | Bar-zeev et al. | Jan 2022 | A1 |
20220030197 | Ishimoto | Jan 2022 | A1 |
20220070241 | Yerli | Mar 2022 | A1 |
20220083197 | Rockel et al. | Mar 2022 | A1 |
20220092862 | Faulkner et al. | Mar 2022 | A1 |
20220100270 | Pastrana Vicente et al. | Mar 2022 | A1 |
20220101593 | Rockel et al. | Mar 2022 | A1 |
20220101612 | Palangie et al. | Mar 2022 | A1 |
20220104910 | Shelton et al. | Apr 2022 | A1 |
20220121275 | Balaji et al. | Apr 2022 | A1 |
20220121344 | Pastrana Vicente et al. | Apr 2022 | A1 |
20220130107 | Lindh | Apr 2022 | A1 |
20220137705 | Hashimoto et al. | May 2022 | A1 |
20220155853 | Fan et al. | May 2022 | A1 |
20220155909 | Kawashima et al. | May 2022 | A1 |
20220157083 | Jandhyala et al. | May 2022 | A1 |
20220187907 | Lee et al. | Jun 2022 | A1 |
20220191570 | Reid et al. | Jun 2022 | A1 |
20220197403 | Hughes et al. | Jun 2022 | A1 |
20220229524 | Mckenzie et al. | Jul 2022 | A1 |
20220229534 | Terre et al. | Jul 2022 | A1 |
20220232191 | Kawakami et al. | Jul 2022 | A1 |
20220245888 | Singh et al. | Aug 2022 | A1 |
20220253136 | Holder et al. | Aug 2022 | A1 |
20220253149 | Berliner et al. | Aug 2022 | A1 |
20220253194 | Berliner et al. | Aug 2022 | A1 |
20220255995 | Berliner et al. | Aug 2022 | A1 |
20220276720 | Yasui | Sep 2022 | A1 |
20220317776 | Sundstrom et al. | Oct 2022 | A1 |
20220319453 | Llull et al. | Oct 2022 | A1 |
20220326837 | Dessero et al. | Oct 2022 | A1 |
20220350463 | Walkin et al. | Nov 2022 | A1 |
20220365595 | Cieplinski et al. | Nov 2022 | A1 |
20220413691 | Becker et al. | Dec 2022 | A1 |
20220414999 | Ravasz et al. | Dec 2022 | A1 |
20230004216 | Rodgers et al. | Jan 2023 | A1 |
20230008537 | Henderson et al. | Jan 2023 | A1 |
20230021861 | Fujiwara et al. | Jan 2023 | A1 |
20230032545 | Mindlin et al. | Feb 2023 | A1 |
20230068660 | Brent et al. | Mar 2023 | A1 |
20230069764 | Jonker et al. | Mar 2023 | A1 |
20230074080 | Miller et al. | Mar 2023 | A1 |
20230086766 | Olwal et al. | Mar 2023 | A1 |
20230092282 | Boesel et al. | Mar 2023 | A1 |
20230093979 | Stauber et al. | Mar 2023 | A1 |
20230094522 | Stauber et al. | Mar 2023 | A1 |
20230100689 | Chiu et al. | Mar 2023 | A1 |
20230133579 | Chang et al. | May 2023 | A1 |
20230152889 | Cieplinski et al. | May 2023 | A1 |
20230152935 | Mckenzie et al. | May 2023 | A1 |
20230154122 | Dascola et al. | May 2023 | A1 |
20230163987 | Young et al. | May 2023 | A1 |
20230168788 | Faulkner et al. | Jun 2023 | A1 |
20230185426 | Rockel et al. | Jun 2023 | A1 |
20230186577 | Rockel et al. | Jun 2023 | A1 |
20230244857 | Weiss et al. | Aug 2023 | A1 |
20230259265 | Krivoruchko et al. | Aug 2023 | A1 |
20230273706 | Smith et al. | Aug 2023 | A1 |
20230274504 | Ren et al. | Aug 2023 | A1 |
20230308610 | Henderson et al. | Sep 2023 | A1 |
20230315270 | Hylak et al. | Oct 2023 | A1 |
20230315385 | Akmal et al. | Oct 2023 | A1 |
20230316634 | Chiu et al. | Oct 2023 | A1 |
20230316658 | Smith et al. | Oct 2023 | A1 |
20230325004 | Burns et al. | Oct 2023 | A1 |
20230333646 | Pastrana Vicente et al. | Oct 2023 | A1 |
20230350539 | Owen et al. | Nov 2023 | A1 |
20230359199 | Adachi et al. | Nov 2023 | A1 |
20230384907 | Boesel et al. | Nov 2023 | A1 |
20230388357 | Faulkner et al. | Nov 2023 | A1 |
20240086031 | Palangie et al. | Mar 2024 | A1 |
20240086032 | Palangie et al. | Mar 2024 | A1 |
20240087256 | Hylak et al. | Mar 2024 | A1 |
20240094863 | Smith et al. | Mar 2024 | A1 |
20240094882 | Brewer et al. | Mar 2024 | A1 |
20240095984 | Ren et al. | Mar 2024 | A1 |
20240103613 | Chawda et al. | Mar 2024 | A1 |
20240103676 | Pastrana Vicente et al. | Mar 2024 | A1 |
20240103684 | Yu et al. | Mar 2024 | A1 |
20240103687 | Pastrana Vicente et al. | Mar 2024 | A1 |
20240103701 | Pastrana Vicente et al. | Mar 2024 | A1 |
20240103704 | Pastrana Vicente et al. | Mar 2024 | A1 |
20240103707 | Henderson et al. | Mar 2024 | A1 |
20240103716 | Pastrana Vicente et al. | Mar 2024 | A1 |
20240103803 | Krivoruchko et al. | Mar 2024 | A1 |
20240104836 | Dessero et al. | Mar 2024 | A1 |
20240104873 | Pastrana Vicente et al. | Mar 2024 | A1 |
20240104877 | Henderson et al. | Mar 2024 | A1 |
20240111479 | Paul | Apr 2024 | A1 |
20240119682 | Rudman et al. | Apr 2024 | A1 |
20240221291 | Henderson et al. | Jul 2024 | A1 |
20240272782 | Pastrana Vicente et al. | Aug 2024 | A1 |
20240291953 | Cerra et al. | Aug 2024 | A1 |
20240361835 | Hylak et al. | Oct 2024 | A1 |
20240393876 | Chawda et al. | Nov 2024 | A1 |
20240402800 | Shutzberg et al. | Dec 2024 | A1 |
20240402821 | Meyer et al. | Dec 2024 | A1 |
20240404206 | Chiu et al. | Dec 2024 | A1 |
20240411444 | Shutzberg et al. | Dec 2024 | A1 |
20240420435 | Gitter et al. | Dec 2024 | A1 |
20240428488 | Ren et al. | Dec 2024 | A1 |
20250008057 | Chiu et al. | Jan 2025 | A1 |
20250013343 | Smith et al. | Jan 2025 | A1 |
20250013344 | Smith et al. | Jan 2025 | A1 |
20250024008 | Cerra et al. | Jan 2025 | A1 |
20250028423 | Dessero et al. | Jan 2025 | A1 |
20250029319 | Boesel et al. | Jan 2025 | A1 |
20250029328 | Smith et al. | Jan 2025 | A1 |
Number | Date | Country |
---|---|---|
3033344 | Feb 2018 | CA |
104714771 | Jun 2015 | CN |
105264461 | Jan 2016 | CN |
105264478 | Jan 2016 | CN |
108633307 | Oct 2018 | CN |
110476142 | Nov 2019 | CN |
110543230 | Dec 2019 | CN |
110673718 | Jan 2020 | CN |
111641843 | Sep 2020 | CN |
109491508 | Aug 2022 | CN |
0816983 | Jan 1998 | EP |
1530115 | May 2005 | EP |
2551763 | Jan 2013 | EP |
2741175 | Jun 2014 | EP |
2947545 | Nov 2015 | EP |
3088997 | Nov 2016 | EP |
3249497 | Nov 2017 | EP |
3316075 | May 2018 | EP |
3451135 | Mar 2019 | EP |
3503101 | Jun 2019 | EP |
3570144 | Nov 2019 | EP |
3588255 | Jan 2020 | EP |
3654147 | May 2020 | EP |
H06-4596 | Jan 1994 | JP |
H10-51711 | Feb 1998 | JP |
H10-78845 | Mar 1998 | JP |
2005-215144 | Aug 2005 | JP |
2005-333524 | Dec 2005 | JP |
2006-107048 | Apr 2006 | JP |
2006-146803 | Jun 2006 | JP |
2006-295236 | Oct 2006 | JP |
2011-203880 | Oct 2011 | JP |
2012-234550 | Nov 2012 | JP |
2013-196158 | Sep 2013 | JP |
2013-254358 | Dec 2013 | JP |
2013-257716 | Dec 2013 | JP |
2014-21565 | Feb 2014 | JP |
2014-59840 | Apr 2014 | JP |
2014-71663 | Apr 2014 | JP |
2014-99184 | May 2014 | JP |
2014-514652 | Jun 2014 | JP |
2015-56173 | Mar 2015 | JP |
2015-515040 | May 2015 | JP |
2015-118332 | Jun 2015 | JP |
2016-96513 | May 2016 | JP |
2016-194744 | Nov 2016 | JP |
2017-27206 | Feb 2017 | JP |
2017-58528 | Mar 2017 | JP |
2018-5516 | Jan 2018 | JP |
2018-5517 | Jan 2018 | JP |
2018-41477 | Mar 2018 | JP |
2018-106499 | Jul 2018 | JP |
6438869 | Dec 2018 | JP |
2019-40333 | Mar 2019 | JP |
2019-169154 | Oct 2019 | JP |
2019-175449 | Oct 2019 | JP |
2019-536131 | Dec 2019 | JP |
2022-53334 | Apr 2022 | JP |
10-2011-0017236 | Feb 2011 | KR |
10-2016-0012139 | Feb 2016 | KR |
10-2019-0100957 | Aug 2019 | KR |
2010026519 | Mar 2010 | WO |
2011008638 | Jan 2011 | WO |
2012145180 | Oct 2012 | WO |
2013169849 | Nov 2013 | WO |
2014105276 | Jul 2014 | WO |
2014203301 | Dec 2014 | WO |
2015130150 | Sep 2015 | WO |
2015192117 | Dec 2015 | WO |
2015195216 | Dec 2015 | WO |
2017088487 | Jun 2017 | WO |
2018046957 | Mar 2018 | WO |
2018175735 | Sep 2018 | WO |
2019067902 | Apr 2019 | WO |
2019142560 | Jul 2019 | WO |
2019217163 | Nov 2019 | WO |
2020066682 | Apr 2020 | WO |
2020247256 | Dec 2020 | WO |
2021173839 | Sep 2021 | WO |
2021202783 | Oct 2021 | WO |
2022046340 | Mar 2022 | WO |
2022055822 | Mar 2022 | WO |
2022066399 | Mar 2022 | WO |
2022066535 | Mar 2022 | WO |
2022146936 | Jul 2022 | WO |
2022146938 | Jul 2022 | WO |
2022147146 | Jul 2022 | WO |
2022164881 | Aug 2022 | WO |
2022225795 | Oct 2022 | WO |
2023096940 | Jun 2023 | WO |
2023141535 | Jul 2023 | WO |
Entry |
---|
Corrected Notice of Allowability received for U.S. Appl. No. 17/932,655, mailed on Oct. 12, 2023, 2 pages. |
Final Office Action received for U.S. Appl. No. 17/659,147, mailed on Oct. 4, 2023, 17 pages. |
International Search Report received for PCT Patent Application No. PCT/US2021/071596, mailed on Apr. 8, 2022, 7 pages. |
International Search Report received for PCT Patent Application No. PCT/US2022/071704, mailed on Aug. 26, 2022, 6 pages. |
Non-Final Office Action received for U.S. Appl. No. 17/659,147, mailed on Mar. 16, 2023, 19 pages. |
Non-Final Office Action received for U.S. Appl. No. 17/932,655, mailed on Apr. 20, 2023, 10 pages. |
Notice of Allowance received for U.S. Appl. No. 17/448,876, mailed on Apr. 7, 2022, 9 pages. |
Notice of Allowance received for U.S. Appl. No. 17/448,876, mailed on Jul. 20, 2022, 8 pages. |
Notice of Allowance received for U.S. Appl. No. 17/932,655, mailed on Sep. 29, 2023, 7 pages. |
AquaSnap Window Manager: dock, snap, tile, organize [online], Nurgo Software, Available online at: <https://www.nurgo-software.com/products/aquasnap>, [retrieved on Jun. 27, 2023], 5 pages. |
Corrected Notice of Allowability received for U.S. Appl. No. 17/448,875, mailed on Apr. 24, 2024, 4 pages. |
Corrected Notice of Allowability received for U.S. Appl. No. 17/479,791, mailed on May 19, 2023, 2 pages. |
Corrected Notice of Allowability received for U.S. Appl. No. 17/659,147, mailed on Feb. 14, 2024, 6 pages. |
Corrected Notice of Allowability received for U.S. Appl. No. 18/465,098, mailed on Mar. 13, 2024, 3 pages. |
Corrected Notice of Allowance received for U.S. Appl. No. 17/478,593, mailed on Dec. 21, 2022, 2 pages. |
European Search Report received for European Patent Application No. 21791153.6, mailed on Mar. 22, 2024, 5 pages. |
Extended European Search Report received for European Patent Application No. 23158818.7, mailed on Jul. 3, 2023, 12 pages. |
Extended European Search Report received for European Patent Application No. 23158929.2, mailed on Jun. 27, 2023, 12 pages. |
Extended European Search Report received for European Patent Application No. 23197572.3, mailed on Feb. 19, 2024, 7 pages. |
Final Office Action received for U.S. Appl. No. 17/448,875, mailed on Mar. 16, 2023, 24 pages. |
Final Office Action received for U.S. Appl. No. 17/580,495, mailed on May 13, 2024, 29 pages. |
Final Office Action received for U.S. Appl. No. 18/182,300, mailed on Feb. 16, 2024, 32 pages. |
Home | Virtual Desktop [online], Virtual Desktop, Available online at: <https://www.vrdesktop.net>, [retrieved on Jun. 29, 2023], 4 pages. |
International Search Report for PCT Application No. PCT/US2022/076608, mailed Feb. 24, 2023, 8 pages. |
International Search Report received for PCT Application No. PCT/US2022/076603, mailed on Jan. 9, 2023, 4 pages. |
International Search Report received for PCT Application No. PCT/US2022/076719, mailed on Mar. 3, 2023, 8 pages. |
International Search Report received for PCT Application No. PCT/US2023/017335, mailed on Aug. 22, 2023, 6 pages. |
International Search Report received for PCT Application No. PCT/US2023/018213, mailed on Jul. 26, 2023, 6 pages. |
International Search Report received for PCT Application No. PCT/US2023/019458, mailed on Aug. 8, 2023, 7 pages. |
International Search Report received for PCT Application No. PCT/US2023/060943, mailed on Jun. 6, 2023, 7 pages. |
Simple Modal Window With Background Blur Effect, Available online at: <http://web.archive.org/web/20160313233427/https://www.cssscript.com/simple-modal-window-with-background-blur-effect/>, Mar. 13, 2016, 5 pages. |
Pfeuffer et al., “Gaze + Pinch Interaction in Virtual Reality”, In Proceedings of SUI '17, Brighton, United Kingdom, Oct. 16-17, 2017, pp. 99-108. |
International Search Report received for PCT Patent Application No. PCT/US2021/050948, mailed on Mar. 4, 2022, 6 pages. |
McGill et al., “Expanding the Bounds of Seated Virtual Workspaces”, University of Glasgow, Available online at: <https://core.ac.uk/download/pdf/323988271.pdf>, [retrieved on Jun. 27, 2023], Jun. 5, 2020, 44 pages. |
International Search Report received for PCT Patent Application No. PCT/US2021/071518, mailed on Feb. 25, 2022, 7 pages. |
International Search Report received for PCT Patent Application No. PCT/US2021/071595, mailed on Mar. 17, 2022, 7 pages. |
International Search Report received for PCT Patent Application No. PCT/US2022/013208, mailed on Apr. 26, 2022, 7 pages. |
International Search Report received for PCT Patent Application No. PCT/US2023/074257, mailed on Nov. 21, 2023, 5 pages. |
International Search Report received for PCT Patent Application No. PCT/US2023/074950, mailed on Jan. 3, 2024, 9 pages. |
International Search Report received for PCT Patent Application No. PCT/US2023/074979, mailed on Feb. 26, 2024, 6 pages. |
Yamada, Yoshihiro, “How to generate a modal window with ModalPopup control”, Available online at: <http://web.archive.org/web/20210920015801/https://atmarkit.itmedia.co.jp/fdotnet/dotnettips/580aspajaxmodalpopup/aspajaxmodalpopup.html>, Sep. 20, 2021 [Search Date Aug. 22, 2023] (1 page of English Abstract, 7 pages of Official Copy Submitted). See attached Communication 37 CFR § 1.98(a)(3). |
Non-Final Office Action received for U.S. Appl. No. 17/448,875, mailed on Oct. 6, 2022, 25 pages. |
Non-Final Office Action received for U.S. Appl. No. 17/448,875, mailed on Sep. 29, 2023, 30 pages. |
Non-Final Office Action received for U.S. Appl. No. 17/479,791, mailed on May 11, 2022, 18 pages. |
Non-Final Office Action received for U.S. Appl. No. 17/580,495, mailed on Dec. 11, 2023, 27 pages. |
Non-Final Office Action received for U.S. Appl. No. 17/932,999, mailed on Feb. 23, 2024, 22 pages. |
Non-Final Office Action received for U.S. Appl. No. 18/157,040, mailed on May 2, 2024, 25 pages. |
Non-Final Office Action received for U.S. Appl. No. 18/182,300, mailed on May 29, 2024, 33 pages. |
Non-Final Office Action received for U.S. Appl. No. 18/182,300, mailed on Oct. 26, 2023, 29 pages. |
Non-Final Office Action received for U.S. Appl. No. 18/305,201, mailed on May 23, 2024, 11 pages. |
Non-Final Office Action received for U.S. Appl. No. 18/336,770, mailed on Jun. 5, 2024, 12 pages. |
Notice of Allowance received for U.S. Appl. No. 17/448,875, mailed on Apr. 17, 2024, 8 pages. |
Notice of Allowance received for U.S. Appl. No. 17/478,593, mailed on Aug. 31, 2022, 10 pages. |
Notice of Allowance received for U.S. Appl. No. 17/479,791, mailed on Mar. 13, 2023, 9 pages. |
Notice of Allowance received for U.S. Appl. No. 17/479,791, mailed on Nov. 17, 2022, 9 pages. |
Notice of Allowance received for U.S. Appl. No. 17/580,495, mailed on Jun. 6, 2023, 6 pages. |
Notice of Allowance received for U.S. Appl. No. 17/580,495, mailed on Nov. 30, 2022, 12 pages. |
Notice of Allowance received for U.S. Appl. No. 17/650,775, mailed on Jan. 25, 2024, 10 pages. |
Notice of Allowance received for U.S. Appl. No. 17/650,775, mailed on Sep. 18, 2023, 10 pages. |
Notice of Allowance received for U.S. Appl. No. 17/659,147, mailed on Jan. 26, 2024, 13 pages. |
Notice of Allowance received for U.S. Appl. No. 17/659,147, mailed on May 29, 2024, 13 pages. |
Notice of Allowance received for U.S. Appl. No. 17/932,655, mailed on Jan. 24, 2024, 7 pages. |
Notice of Allowance received for U.S. Appl. No. 17/933,707, mailed on Mar. 6, 2024, 9 pages. |
Notice of Allowance received for U.S. Appl. No. 18/154,757, mailed on Jan. 23, 2024, 10 pages. |
Notice of Allowance received for U.S. Appl. No. 18/154,757, mailed on May 10, 2024, 12 pages. |
Notice of Allowance received for U.S. Appl. No. 18/182,304, mailed on Jan. 24, 2024, 9 pages. |
Notice of Allowance received for U.S. Appl. No. 18/182,304, mailed on Oct. 2, 2023, 9 pages. |
Notice of Allowance received for U.S. Appl. No. 18/421,675, mailed on Apr. 11, 2024, 9 pages. |
Notice of Allowance received for U.S. Appl. No. 18/463,739, mailed on Feb. 1, 2024, 10 pages. |
Notice of Allowance received for U.S. Appl. No. 18/463,739, mailed on Oct. 30, 2023, 11 pages. |
Notice of Allowance received for U.S. Appl. No. 18/465,098, mailed on Mar. 4, 2024, 6 pages. |
Notice of Allowance received for U.S. Appl. No. 18/465,098, mailed on Nov. 17, 2023, 8 pages. |
Restriction Requirement received for U.S. Appl. No. 17/932,999, mailed on Oct. 3, 2023, 6 pages. |
Search Report received for Chinese Patent Application No. 202310873465.7, mailed on Feb. 1, 2024, 5 pages (2 pages of English Translation and 3 pages of Official Copy). |
Bhowmich Shimmila, “Explorations on Body-Gesture Based Object Selection on HMD Based VR Interfaces for Dense and Occluded Dense Virtual Environments”, Report: State of the Art Seminar, Department of Design Indian Institute of Technology, Guwahati, Nov. 2018, 25 pages. |
Bolt et al., “Two-Handed Gesture in Multi-Modal Natural Dialog”, Uist '92, 5th Annual Symposium on User Interface Software and Technology. Proceedings of the ACM Symposium on User Interface Software and Technology, Monterey, Nov. 15-18, 1992, pp. 7-14. |
Brennan Dominic, "4 Virtual Reality Desktops for Vive, Rift, and Windows VR Compared", [online], Road to VR, Available online at: <https://www.roadtovr.com/virtual-reality-desktop-compared-oculus-rift-htc-vive/>, [retrieved on Jun. 29, 2023], Jan. 3, 2018, 4 pages.
Camalich Sergio, "CSS Buttons with Pseudo-elements", Available online at: <https://tympanus.net/codrops/2012/01/11/css-buttons-with-pseudo-elements/>, [retrieved on Jul. 12, 2017], Jan. 11, 2012, 8 pages.
Chatterjee et al., "Gaze+Gesture: Expressive, Precise and Targeted Free-Space Interactions", ICMI '15, Nov. 9-13, 2015, 8 pages.
Lin et al., "Towards Naturally Grabbing and Moving Objects in VR", IS&T International Symposium on Electronic Imaging and the Engineering Reality of Virtual Reality, 2016, 6 pages.
Corrected Notice of Allowability received for U.S. Appl. No. 18/154,757, mailed on Aug. 30, 2024, 2 pages.
Corrected Notice of Allowability received for U.S. Appl. No. 18/421,827, mailed on Aug. 29, 2024, 2 pages.
Corrected Notice of Allowability received for U.S. Appl. No. 18/463,739, mailed on Oct. 4, 2024, 2 pages.
Corrected Notice of Allowability received for U.S. Appl. No. 17/935,095, mailed on Oct. 18, 2024, 3 pages.
European Search Report received for European Patent Application No. 21801378.7, mailed on Jul. 10, 2024, 5 pages.
Extended European Search Report received for European Patent Application No. 24159868.9, mailed on Oct. 9, 2024, 13 pages.
Extended European Search Report received for European Patent Application No. 24178730.8, mailed on Oct. 14, 2024, 8 pages.
Extended European Search Report received for European Patent Application No. 24178752.2, mailed on Oct. 4, 2024, 8 pages.
Extended European Search Report received for European Patent Application No. 24179233.2, mailed on Oct. 2, 2024, 10 pages.
Extended European Search Report received for European Patent Application No. 24179830.5, mailed on Nov. 5, 2024, 11 pages.
Final Office Action received for U.S. Appl. No. 14/531,874, mailed on Nov. 4, 2016, 10 pages.
Final Office Action received for U.S. Appl. No. 15/644,639, mailed on Sep. 19, 2019, 12 pages.
Final Office Action received for U.S. Appl. No. 17/202,034, mailed on May 4, 2023, 41 pages.
Final Office Action received for U.S. Appl. No. 17/202,034, mailed on Nov. 4, 2024, 50 pages.
Final Office Action received for U.S. Appl. No. 17/816,314, mailed on Jan. 20, 2023, 11 pages.
Final Office Action received for U.S. Appl. No. 17/935,095, mailed on Dec. 29, 2023, 15 pages.
Final Office Action received for U.S. Appl. No. 18/182,300, mailed on Oct. 31, 2024, 34 pages.
Final Office Action received for U.S. Appl. No. 18/375,280, mailed on Jul. 12, 2024, 19 pages.
Office Action received for U.S. Appl. No. 18/157,040, mailed on Dec. 2, 2024, 25 pages.
Search Report received for PCT Patent Application No. PCT/US2023/060052, mailed on May 24, 2023, 6 pages.
Search Report received for PCT Patent Application No. PCT/US2023/074962, mailed on Jan. 19, 2024, 9 pages.
Search Report received for PCT Patent Application No. PCT/US2024/030107, mailed on Oct. 23, 2024, 9 pages.
Search Report received for PCT Patent Application No. PCT/US2024/032314, mailed on Nov. 11, 2024, 6 pages.
Search Report received for PCT Patent Application No. PCT/US2015/029727, mailed on Nov. 2, 2015, 6 pages.
Search Report received for PCT Patent Application No. PCT/US2021/022413, mailed on Aug. 13, 2021, 7 pages.
Search Report received for PCT Patent Application No. PCT/US2022/076985, mailed on Feb. 20, 2023, 5 pages.
Search Report received for PCT Patent Application No. PCT/US2023/074793, mailed on Feb. 6, 2024, 6 pages.
Search Report received for PCT Patent Application No. PCT/US2024/026102, mailed on Aug. 26, 2024, 5 pages.
Restarting Period for Response received for U.S. Appl. No. 15/644,639, mailed on Sep. 28, 2018, 8 pages.
Office Action received for U.S. Appl. No. 14/531,874, mailed on May 18, 2016, 11 pages.
Office Action received for U.S. Appl. No. 15/644,639, mailed on Apr. 12, 2019, 11 pages.
Office Action received for U.S. Appl. No. 15/644,639, mailed on Sep. 10, 2018, 9 pages.
Non-Final Office Action received for U.S. Appl. No. 16/881,599, mailed on Apr. 28, 2021, 8 pages.
Non-Final Office Action received for U.S. Appl. No. 17/123,000, mailed on Nov. 12, 2021, 8 pages.
Non-Final Office Action received for U.S. Appl. No. 17/202,034, mailed on Jan. 19, 2024, 44 pages.
Non-Final Office Action received for U.S. Appl. No. 17/202,034, mailed on Jul. 20, 2022, 38 pages.
Non-Final Office Action received for U.S. Appl. No. 17/580,495, mailed on Aug. 15, 2024, 28 pages.
Non-Final Office Action received for U.S. Appl. No. 17/816,314, mailed on Jul. 6, 2023, 10 pages.
Non-Final Office Action received for U.S. Appl. No. 17/816,314, mailed on Sep. 23, 2022, 10 pages.
Non-Final Office Action received for U.S. Appl. No. 17/935,095, mailed on Jun. 22, 2023, 15 pages.
Non-Final Office Action received for U.S. Appl. No. 18/154,697, mailed on Nov. 24, 2023, 10 pages.
Non-Final Office Action received for U.S. Appl. No. 18/322,469, mailed on Nov. 15, 2024, 34 pages.
Non-Final Office Action received for U.S. Appl. No. 18/473,796, mailed on Aug. 16, 2024, 21 pages.
Notice of Allowance received for U.S. Appl. No. 18/154,757, mailed on Aug. 26, 2024, 12 pages.
Notice of Allowance received for U.S. Appl. No. 14/531,874, mailed on Mar. 28, 2017, 9 pages.
Notice of Allowance received for U.S. Appl. No. 15/644,639, mailed on Jan. 16, 2020, 16 pages.
Notice of Allowance received for U.S. Appl. No. 16/881,599, mailed on Dec. 17, 2021, 7 pages.
Notice of Allowance received for U.S. Appl. No. 17/123,000, mailed on May 27, 2022, 8 pages.
Notice of Allowance received for U.S. Appl. No. 17/123,000, mailed on Sep. 19, 2022, 7 pages.
Notice of Allowance received for U.S. Appl. No. 17/448,875, mailed on Jul. 12, 2024, 8 pages.
Notice of Allowance received for U.S. Appl. No. 17/816,314, mailed on Jan. 4, 2024, 6 pages.
Notice of Allowance received for U.S. Appl. No. 17/932,999, mailed on Sep. 12, 2024, 9 pages.
Notice of Allowance received for U.S. Appl. No. 17/935,095, mailed on Jul. 3, 2024, 9 pages.
Notice of Allowance received for U.S. Appl. No. 18/154,697, mailed on Aug. 6, 2024, 8 pages.
Notice of Allowance received for U.S. Appl. No. 18/154,697, mailed on Dec. 3, 2024, 7 pages.
Notice of Allowance received for U.S. Appl. No. 18/336,770, mailed on Nov. 29, 2024, 9 pages.
Notice of Allowance received for U.S. Appl. No. 18/421,675, mailed on Jul. 31, 2024, 8 pages.
Notice of Allowance received for U.S. Appl. No. 18/421,827, mailed on Aug. 14, 2024, 10 pages.
Notice of Allowance received for U.S. Appl. No. 18/423,187, mailed on Jun. 5, 2024, 9 pages.
Notice of Allowance received for U.S. Appl. No. 18/463,739, mailed on Jun. 17, 2024, 9 pages.
Notice of Allowance received for U.S. Appl. No. 18/465,098, mailed on Jun. 20, 2024, 8 pages.
Notice of Allowance received for U.S. Appl. No. 18/515,188, mailed on Nov. 27, 2024, 9 pages.
Supplemental Notice of Allowance received for U.S. Appl. No. 14/531,874, mailed on Jul. 26, 2017, 5 pages.
Bohn Dieter, "Rebooting WebOS: How LG Rethought The Smart TV", The Verge, Available online at: <http://www.theverge.com/2014/1/6/5279220/rebooting-webos-how-lg-rethought-the-smart-tv>, [retrieved on Aug. 26, 2019], Jan. 6, 2014, 5 pages.
Fatima et al., "Eye Movement Based Human Computer Interaction", 3rd International Conference on Recent Advances in Information Technology (RAIT), Mar. 3, 2016, pp. 489-494.
Grey Melissa, "Comcast's New X2 Platform Moves your DVR Recordings from the Box to the Cloud", Engadget, Available online at: <http://www.engadget.com/2013/06/11/comcast-x2-platform/>, Jun. 11, 2013, 15 pages.
Pfeuffer et al., "Gaze and Touch Interaction on Tablets", UIST '16, Tokyo, Japan, ACM, Oct. 16-19, 2016, pp. 301-311.
Schenk et al., "SPOCK: A Smooth Pursuit Oculomotor Control Kit", CHI '16 Extended Abstracts, San Jose, CA, USA, ACM, May 7-12, 2016, pp. 2681-2687.
Corrected Notice of Allowability received for U.S. Appl. No. 17/932,999, mailed on Jan. 23, 2025, 9 pages.
Corrected Notice of Allowability received for U.S. Appl. No. 18/174,337, mailed on Jan. 15, 2025, 2 pages.
Extended European Search Report received for European Patent Application No. 24190323.6, mailed on Dec. 12, 2024, 9 pages.
Final Office Action received for U.S. Appl. No. 18/473,196, mailed on Dec. 6, 2024, 22 pages.
International Search Report received for PCT Patent Application No. PCT/US2024/032451, mailed on Nov. 15, 2024, 6 pages.
International Search Report received for PCT Patent Application No. PCT/US2024/032456, mailed on Nov. 14, 2024, 6 pages.
Non-Final Office Action received for U.S. Appl. No. 18/149,640, mailed on Jan. 15, 2025, 17 pages.
Non-Final Office Action received for U.S. Appl. No. 18/375,280, mailed on Nov. 27, 2024, 17 pages.
Notice of Allowance received for U.S. Appl. No. 18/154,757, mailed on Jan. 23, 2025, 12 pages.
Notice of Allowance received for U.S. Appl. No. 18/174,337, mailed on Jan. 2, 2025, 8 pages.
Restriction Requirement received for U.S. Appl. No. 18/473,187, mailed on Dec. 30, 2024, 5 pages.
Supplemental Notice of Allowance received for U.S. Appl. No. 18/515,188, mailed on Dec. 12, 2024, 2 pages.
Number | Date | Country
---|---|---
20240310971 A1 | Sep 2024 | US

Number | Date | Country
---|---|---
63083792 | Sep 2020 | US

Relation | Number | Date | Country
---|---|---|---
Parent | 17932655 | Sep 2022 | US
Child | 18671936 | | US
Parent | 17448876 | Sep 2021 | US
Child | 17932655 | | US