The present disclosure generally relates to assessing user interactions with electronic devices that involve gaze-based and other types of user activities.
Existing user interaction systems may be improved with respect to facilitating interactions based on gaze and other types of user activities.
Various implementations disclosed herein include devices, systems, and methods that assess user interactions to trigger user interface responses. In some implementations, a user interface response is triggered based on identifying a gaze-holding event (i.e., a fixation-like gaze event that is not associated with saccadic behavior). Gaze-holding events (rather than saccade-related behaviors in the gaze data), or acceptable portions thereof, may be used to trigger user interface responses. Using gaze-holding events to trigger user interface behavior may be advantageous, for example, because gaze directions during gaze-holding events may be more likely to correspond to a user perceiving what they are seeing and/or intentionally looking at something.
Using gaze-holding events can facilitate accurate gaze-based hover responses. For example, the user interface may be enabled to highlight a user interface icon when a user intentionally looks at the icon (e.g., similar to a mouse-hover response on a mouse-based user interface), while not highlighting the icon when the user's gaze happens to move over the icon while the user is glancing around within the user interface. Similarly, using gaze-holding events can facilitate accurate gesture-to-gaze association-based input responses. In one example, this involves associating single hand gestures, such as pinches, gestures spreading all five fingers of one hand, or multi-finger swipe gestures, with users intentionally gazing at user interface (UI) objects, while not associating such activities with objects that happen to be gazed upon during saccade-related or other unintentional behaviors. In another example, this involves associating multi-hand gestures, such as both hands pinching at the same time or the hands moving away from one another, with users intentionally gazing at UI objects, while not associating such activities with objects that happen to be gazed upon during saccade-related or other unintentional behaviors. In another example, this involves associating head movement, such as nodding, shaking, or tilting of the head, with users intentionally gazing at UI objects, while not associating such activities with objects that happen to be gazed upon during saccade-related or other unintentional behaviors. In some implementations, a gaze is associated with one or more of a hand gesture, head gesture, torso-based gesture, arm gesture, leg gesture, or whole-body movement (e.g., associating a gaze with a combined hand/head gesture). A gaze may additionally, or alternatively, be associated with input provided via a physical device, such as a keyboard, mouse, hand-held controller, watch, etc.
In some implementations, gaze-holding events are used to associate a non-eye-based user activity, such as a hand or head gesture, with an eye-based activity, such as the user gazing at a particular user interface component displayed within a view of a three-dimensional (3D) environment. For example, a user's pinching hand gesture may be associated with the user gazing at a particular user interface component, such as a button, at around the same time (e.g., within a threshold amount of time) that the pinching hand gesture is made. These associated behaviors (e.g., the pinch and the gaze at the button) may then be interpreted as user input (e.g., user input selecting or otherwise acting upon that user interface component). In some implementations, non-eye-based user activity is only associated with certain types of eye-based user activity that are likely to correspond to a user perceiving what they are seeing and/or intentionally looking at something. For example, it may be desirable to associate a user hand gesture with gaze-holding events corresponding to intentional/perceptive user activity. Gaze-holding events occur when a gaze holds on an object, whether the head is static or moving. It may be undesirable to associate a user hand gesture with a saccadic eye event that may occur reflexively rather than based on a user perceiving what they see or intentionally looking at something.
Gaze data may be examined or interpreted to identify gaze-holding events (e.g., non-saccadic eye events). The non-eye-based user activity may then be associated with one of those events, rather than being associated with a reflexive, saccadic eye event. During a saccadic event, for example, a user may look away from the user interface element that they intend to interact with for a brief period. Some implementations ensure that non-eye-based activity (e.g., a user's hand gesture) is not associated with a saccadic event or other gaze event during which the user's gaze does not accurately correspond to the user interface or other content with which the user intends to interact.
In some implementations, eye gaze data (e.g., eye velocity data, eye acceleration data, change in gaze pose, etc.) is used to identify a subset of gaze events that only includes gaze-holding events and that excludes reflexive, saccadic events, blinks, and other eye behavior that does not correspond to a user perceiving what they are seeing and/or intentionally looking at something. Excluding saccadic events, blinks, and other eye behavior that does not correspond to a user perceiving what they are seeing and/or intentionally looking at something may improve the accuracy and/or efficiency of a system that attempts to accurately associate non-eye-based user activity with intentional user gazing (i.e., intentionally gazing at a user interface component for the purpose of providing user input corresponding to that user interface component). Thus, in some implementations, user non-eye-based activities, such as hand gestures, are only associated with gaze-holding events based on the events being more likely than non-gaze-holding events to correspond to a user perceiving what they are seeing and/or intentionally looking at something.
In some implementations, a processor performs a method by executing instructions stored on a (e.g., non-transitory) computer readable medium. The method obtains gaze motion classification data that was generated based on sensor data of an eye captured by the one or more sensors. The gaze motion classification data distinguishes gaze periods associated with gaze-holding events (e.g., intentional fixations on user interface targets) from gaze periods associated with non-gaze-holding events (e.g., gaze shifting events, blink/loss events, etc.). The gaze motion classification data may be provided by a simple gaze motion classifier (e.g., a heuristic algorithm that assesses only gaze velocity, or a more complex algorithm or machine learning model that uses more than gaze velocity). Using the gaze motion classification data may facilitate triggering user interface responses only in appropriate circumstances (e.g., only based on intentional fixations on user interface targets and not based on unintentional gaze motion (e.g., saccades, blinks, etc.)). In some implementations, gaze classification output (e.g., identifying gaze-holding events) is assessed to lock gaze during saccades, losses, and fast fixations and/or to stabilize the gaze during fixations.
The method may use gaze classification data that is generated based on a gaze velocity at multiple times. The gaze data may be obtained based on sensor data of an eye captured by the sensor. For example, the gaze data may be based on a signal of live gaze velocity data obtained based on a stream of live images of the eye captured by an inward facing camera of a head-mounted device.
The gaze-holding events may be identified based on the gaze velocity. Saccadic gaze events, blinks, and/or other eye events unlikely to correspond to a user perceiving what they are seeing and/or intentionally looking at something may be excluded from the identified gaze-holding events.
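By way of illustration, a simple velocity-only gaze motion classifier of the kind described above might be sketched as follows. The data structures, names, and example velocity threshold are hypothetical and are provided only to illustrate the concept; actual implementations may use more complex algorithms or machine learning models that consider more than gaze velocity.

```python
from dataclasses import dataclass
from enum import Enum
from typing import List, Optional

class GazeLabel(Enum):
    GAZE_HOLDING = "gaze_holding"   # fixation-like, eligible to trigger UI responses
    SACCADE = "saccade"             # fast gaze shift, excluded
    BLINK = "blink"                 # signal loss, excluded

@dataclass
class GazeSample:
    timestamp: float                      # seconds
    velocity_deg_per_s: Optional[float]   # None when the eye signal is lost (e.g., blink)

def classify_gaze_samples(samples: List[GazeSample],
                          saccade_velocity_threshold: float = 100.0) -> List[GazeLabel]:
    """Label each gaze sample using only gaze velocity (heuristic classifier)."""
    labels = []
    for s in samples:
        if s.velocity_deg_per_s is None:
            labels.append(GazeLabel.BLINK)
        elif s.velocity_deg_per_s > saccade_velocity_threshold:
            labels.append(GazeLabel.SACCADE)
        else:
            labels.append(GazeLabel.GAZE_HOLDING)
    return labels
```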
The method includes detecting a user activity and triggering a user interface response based on the user activity and the gaze motion classification data. The method may include triggering a user interface response based on determining that a gaze-holding event of the gaze-holding events corresponds to a user interface element. For example, based on a gaze-holding event having a gaze direction directed at a particular user interface icon, the method may include triggering a hover-type response by the user interface (e.g., highlighting that icon as a “hovered” or “in focus” element of the user interface). In some implementations, the user activity is a gaze in a gaze direction occurring during a gaze-holding event and the user interface response comprises providing an indication of user attention to the user interface element based on determining that the gaze-holding event corresponds to the user interface element. Note that a gaze direction may correspond to a gaze direction of a single eye or a gaze direction determined based on both eyes. In one example, a gaze direction of a user's dominant eye is used in assessing user activity and triggering user interface responses.
In some implementations, the user activity is a gesture or input device interaction distinct from the gaze, the gaze-holding event is associated with the user activity, and the user interface response is triggered based on associating the user activity with the user interface element. For example, based on a gaze-holding event being directed at a particular user interface icon and an occurrence of a user activity (e.g., a pinch gesture) that is associated with the gaze-holding event, the method may include triggering a selection-type response by the user interface (e.g., triggering a selection or “clicked on” action on the user interface icon).
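As a non-limiting illustration, the two response types described above (a hover-type response to the gaze-holding event alone, and a selection-type response when a distinct user activity is associated with it) might be dispatched as in the following sketch. The element methods (set_hovered, select) and the activity representation are hypothetical.

```python
def trigger_ui_response(gaze_holding_event, ui_element, associated_activity=None):
    """Trigger a hover-type or selection-type response for a gaze-holding event.

    ui_element is the element the gaze-holding event was determined to correspond
    to (e.g., via hit testing of the event's gaze direction).
    """
    if associated_activity is None:
        # Gaze alone: indicate user attention, e.g., highlight as "hovered"/"in focus".
        ui_element.set_hovered(True)
    elif associated_activity.kind == "pinch":
        # Gaze plus an associated gesture: treat as a selection/"clicked on" action.
        ui_element.select()
```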
The method may include detecting that user activity has occurred, where the activity is distinct from the gaze-holding events (e.g., is a non-eye-based user activity such as a pinch or other hand gesture). Examples of activity distinct from the gaze-holding events include activities that are separate from the eye, including, but not limited to, single hand gestures, multi-hand gestures, head movements, torso movements, movements of arms or legs, whole body movements, and/or interactions with other devices.
The method may include associating a gaze-holding event with the activity. Accordingly, in various implementations, a gaze-holding event is associated with one or more of a gesture made by a single hand, a gesture that involves one or more fingers, a gesture made by multiple hands, a gesture made by a head, a gesture made by hand and head positions/movements made at approximately the same time, and/or inputs to a device such as a controller, input device, wearable device, or hand-held device.
In some implementations, the method includes determining that a gaze-holding (e.g., non-saccadic) event occurred during/simultaneously with the activity (e.g., pinch) and, based on this determination, associating the gaze-holding event with the activity. Thus, a pinch that occurs while a user's gaze is associated with a button (e.g., fixed on or around a button) may be associated with that button (e.g., associating the pinch with the gazed-upon button). In some implementations, the method includes determining that a gaze-holding (e.g., non-saccadic) event did not occur during/simultaneously with the activity (e.g., pinch) and includes determining whether the activity is a valid late activity (e.g., a valid late pinch). This may be based on determining whether the late activity occurred within a threshold time of a prior gaze-holding (e.g., non-saccadic) event and, if so, associating the activity with that prior gaze-holding event. In some implementations, if no gaze-holding (e.g., non-saccadic) event occurs during/simultaneously with the activity or prior within the time threshold, then the method includes waiting to see if a gaze-holding event occurs within an upcoming time period. If a new gaze-holding event does occur within such a period (e.g., within a threshold time), then the method may include associating the activity with that new gaze-holding event. In these examples, a non-eye-based activity, such as a pinch, that occurs during a saccade is not associated with the saccade (which is not a gaze-holding event). Instead, the non-eye-based activity, such as a pinch, may be associated with a prior or upcoming non-saccadic gaze-holding event. The associating of a non-eye-based activity with a gaze-holding event may identify an object associated with the event, such as a user interface target at which the gaze of the identified gaze-holding event is directed. Thus, the user's non-eye-based activity (e.g., pinch) can be associated with user interface components and other objects. In some implementations, content is presented to appear within a 3D environment such as an extended reality (XR) environment, and the techniques disclosed herein are used to identify user interactions with a user interface and/or other content within that 3D environment.
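The following sketch illustrates one possible form of this association logic. It assumes gaze-holding events carry start and end timestamps, uses placeholder threshold values, and is written as an offline check for clarity; a live implementation would instead wait for the upcoming-event case as new gaze data arrives.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class GazeHoldingEvent:
    start: float                       # seconds
    end: float                         # seconds
    target_id: Optional[str] = None    # UI element hit by the event's gaze direction, if any

def associate_activity_with_gaze_holding(activity_time: float,
                                         events: List[GazeHoldingEvent],
                                         prior_window_s: float = 0.05,
                                         future_window_s: float = 0.05) -> Optional[GazeHoldingEvent]:
    """Associate a non-eye-based activity (e.g., a pinch) with a gaze-holding event.

    Preference order: (1) an event overlapping the activity, (2) the most recent
    prior event within prior_window_s ("valid late activity"), (3) the next event
    within future_window_s. Returns None if no suitable gaze-holding event exists.
    """
    # (1) event occurring during/simultaneously with the activity
    for event in events:
        if event.start <= activity_time <= event.end:
            return event
    # (2) most recent prior event within the threshold
    prior = [e for e in events if e.end <= activity_time]
    if prior:
        last = max(prior, key=lambda e: e.end)
        if activity_time - last.end <= prior_window_s:
            return last
    # (3) upcoming event within the threshold
    upcoming = [e for e in events if e.start >= activity_time]
    if upcoming:
        nxt = min(upcoming, key=lambda e: e.start)
        if nxt.start - activity_time <= future_window_s:
            return nxt
    return None
```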
In some implementations, the user interface response is based on user activity (e.g., a large saccade), where the response ignores/does not use a gaze-holding event that follows the large saccade event. In one example, the user activity is a saccadic event having a characteristic that exceeds a threshold and the user interface response is based on excluding a potential gaze-holding event or a portion of a gaze-holding event occurring during a time period following the saccadic event. In some implementations, identifying gaze-holding events comprises excluding a potential gaze-holding event or a portion of a gaze-holding event occurring during a time period following a saccadic event in the velocity data, wherein the potential gaze-holding event is excluded based on: (a) an amplitude representing velocity change during the saccadic event; (b) a rate of change of velocity during the saccadic event; (c) a duration of the potential gaze-holding event; or (d) gaze travel distance.
In accordance with some implementations, a device includes one or more processors, a non-transitory memory, and one or more programs; the one or more programs are stored in the non-transitory memory and configured to be executed by the one or more processors and the one or more programs include instructions for performing or causing performance of any of the methods described herein. In accordance with some implementations, a non-transitory computer readable storage medium has stored therein instructions, which, when executed by one or more processors of a device, cause the device to perform or cause performance of any of the methods described herein. In accordance with some implementations, a device includes: one or more processors, a non-transitory memory, and means for performing or causing performance of any of the methods described herein.
So that the present disclosure can be understood by those of ordinary skill in the art, a more detailed description may be had by reference to aspects of some illustrative implementations, some of which are shown in the accompanying drawings.
In accordance with common practice the various features illustrated in the drawings may not be drawn to scale. Accordingly, the dimensions of the various features may be arbitrarily expanded or reduced for clarity. In addition, some of the drawings may not depict all of the components of a given system, method or device. Finally, like reference numerals may be used to denote like features throughout the specification and figures.
Numerous details are described in order to provide a thorough understanding of the example implementations shown in the drawings. However, the drawings merely show some example aspects of the present disclosure and are therefore not to be considered limiting. Those of ordinary skill in the art will appreciate that other effective aspects and/or variants do not include all of the specific details described herein. Moreover, well-known systems, methods, components, devices and circuits have not been described in exhaustive detail so as not to obscure more pertinent aspects of the example implementations described herein.
People may sense or interact with a physical environment or world without using an electronic device. Physical features, such as a physical object or surface, may be included within a physical environment. For instance, a physical environment may correspond to a physical city having physical buildings, roads, and vehicles. People may directly sense or interact with a physical environment through various means, such as smell, sight, taste, hearing, and touch. This can be in contrast to an extended reality (XR) environment that may refer to a partially or wholly simulated environment that people may sense or interact with using an electronic device. The XR environment may include virtual reality (VR) content, mixed reality (MR) content, augmented reality (AR) content, or the like. Using an XR system, a portion of a person's physical motions, or representations thereof, may be tracked and, in response, properties of virtual objects in the XR environment may be changed in a way that complies with at least one law of nature. For example, the XR system may detect a user's head movement and adjust auditory and graphical content presented to the user in a way that simulates how sounds and views would change in a physical environment. In other examples, the XR system may detect movement of an electronic device (e.g., a laptop, tablet, mobile phone, or the like) presenting the XR environment. Accordingly, the XR system may adjust auditory and graphical content presented to the user in a way that simulates how sounds and views would change in a physical environment. In some instances, other inputs, such as a representation of physical motion (e.g., a voice command), may cause the XR system to adjust properties of graphical content.
Numerous types of electronic systems may allow a user to sense or interact with an XR environment. A non-exhaustive list of examples includes lenses having integrated display capability to be placed on a user's eyes (e.g., contact lenses), heads-up displays (HUDs), projection-based systems, head mountable systems, windows or windshields having integrated display technology, headphones/earphones, input systems with or without haptic feedback (e.g., handheld or wearable controllers), smartphones, tablets, desktop/laptop computers, and speaker arrays. Head mountable systems may include an opaque display and one or more speakers. Other head mountable systems may be configured to receive an opaque external display, such as that of a smartphone. Head mountable systems may capture images/video of the physical environment using one or more image sensors or capture audio of the physical environment using one or more microphones. Instead of an opaque display, some head mountable systems may include a transparent or translucent display. Transparent or translucent displays may direct light representative of images to a user's eyes through a medium, such as a hologram medium, optical waveguide, an optical combiner, optical reflector, other similar technologies, or combinations thereof. Various display technologies, such as liquid crystal on silicon, LEDs, uLEDs, OLEDs, laser scanning light source, digital light projection, or combinations thereof, may be used. In some examples, the transparent or translucent display may be selectively controlled to become opaque. Projection-based systems may utilize retinal projection technology that projects images onto a user's retina or may project virtual content into the physical environment, such as onto a physical surface or as a hologram.
In some implementations, a user interface response provided by device 110 is triggered based on identifying a gaze-holding event (i.e., a fixation-like gaze event that is not associated with saccadic behavior) based on gaze velocity. Using gaze-holding events can facilitate accurate gaze-based hover responses. For example, the device 110 may be enabled to highlight a user interface icon when a user intentionally looks at the icon, while not highlighting the icon when the user's gaze happens to move over the icon while the user is glancing around within the user interface. Similarly, using gaze-holding events, the device 110 can facilitate accurate pinch-and-gaze association-based input responses (e.g., associating pinch activities with users intentionally gazing at user interface objects while not associating pinch activities with objects that happen to be gazed upon during saccade-related behaviors).
In the example of
In this example, the user interface 230 is provided in a way that combines 2D flat portions and 3D effects to provide functional and aesthetic benefits. In this example, the background portion 235 of the user interface 230 is flat. In this example, the background portion 235 includes all aspects of the user interface 230 being displayed except for the message bubbles 242, 244, 246 and new message entry portion 248 with button 250. Displaying a background portion of a user interface of an operating system or application as a flat surface may provide various advantages. Doing so may provide an easy to understand or otherwise easy to use portion of an XR environment for accessing the user interface of the application. In some implementations, multiple user interfaces (e.g., corresponding to multiple, different applications) are presented sequentially and/or simultaneously within an XR environment using flat background portions.
In some implementations, the positions and/or orientations of such one or more user interfaces are determined to facilitate visibility and/or use. The one or more user interfaces may be at fixed positions and orientations within the 3D environment. In such cases, user movements would not affect the position or orientation of the user interfaces within the 3D environment.
In other implementations, the one or more user interfaces may be body-locked content (e.g., having a distance and orientation offset that are fixed relative to a portion of the user's body (e.g., their torso)). For example, the body-locked content of a user interface could be 2 meters away and 45 degrees to the left of the user's torso's forward-facing vector. If the user's head turns while the torso remains static, a body-locked user interface would appear to remain stationary in the 3D environment at 2 m away and 45 degrees to the left of the torso's front facing vector. However, if the user does rotate their torso (e.g., by spinning around in their chair), the body-locked user interface would follow the torso rotation and be repositioned within the 3D environment such that it is still 2 m away and 45 degrees to the left of their torso's new forward-facing vector.
In other implementations, user interface content is defined at a specific distance from the user with the orientation relative to the user remaining static (e.g., if initially displayed in a cardinal direction, it will remain in that cardinal direction regardless of any head or body movement). In this example, the orientation of the body-locked content would not be referenced to any part of the user's body. In this different implementation, the body-locked user interface would not reposition itself in accordance with the torso rotation. For example, a body-locked user interface may be defined to be 2 m away and, based on the direction the user is currently facing, may be initially displayed north of the user. If the user rotates their torso 180 degrees to face south, the body-locked user interface would remain 2 m away to the north of the user, which is now directly behind the user.
A body-locked user interface could also be configured to always remain gravity or horizon aligned, such that head and/or body changes in the roll orientation would not cause the body-locked user interface to move within the 3D environment. Translational movement would cause the body-locked content to be repositioned within the 3D environment in order to maintain the distance offset.
The views 210a-c illustrate the user's gaze 260 and hand 270 gesturing occurring at successive points in time (e.g., view 210a corresponds to a first instant in time, view 210b corresponds to a second instant in time after the first instant in time, and view 210c corresponds to a third instant in time after the second instant in time). In this example, during the period of time from the first instant in time to the third instant in time, the user intends to provide user input selecting button 250 by gazing at the button 250 (i.e., directing their gaze direction 260 at button 250) while simultaneously (e.g., within a threshold amount of time of) making a pinching gesture with hand 270. The user understands that this type of input (e.g., simultaneously gazing at a user interface object such as button 250 while making a pinching hand gesture) will be interpreted as input corresponding to the gazed-at user interface object.
However, while attempting to do so, in this example, the user 102 experiences an involuntary saccade, looking away from the button 250 at the second instant in time when the pinch occurs. Thus, at the first instant in time illustrated in view 210a, the user 102 has not yet pinched and is gazing at the button 250. At the second instant in time illustrated in view 210b, the user 102 has pinched hand 270 but the involuntary, reflexive saccade occurs and thus the gaze 260 is directed at the depiction 220 of the desk 120 rather than at the button 250. This gaze direction does not correspond to the user's intent or what the user is perceiving. At the third instant in time illustrated in view 210c, the user 102 is no longer pinching hand 270 and the saccade has ended with the gaze 260 returning to the button 250.
Some implementations disclosed herein assess user gaze data (e.g., gaze velocity) to identify types of eye events that should be associated with non-eye-based activity versus types of eye events that should not be associated with non-eye-based activity. Some implementations attempt to distinguish gaze-holding events (i.e., eye gaze events associated with a user intentionally gazing at an object and/or perceiving what they are seeing) from other gaze events (e.g., saccades, blinks, etc.) in which the user is not intentionally gazing at an object and/or perceiving what they are seeing.
In the example of
Based on determining that the pinch did not occur during/simultaneously with a gaze-holding event, the device 110 may attempt to associate the pinch with a prior or future gaze-holding event. For example, the device 110 may determine that the pinch (at the second instant in time illustrated in view 210b) occurred within a predetermined threshold amount of time following an identified gaze-holding event (e.g., occurring at the first instant in time illustrated in view 210a). For example, the threshold may be a 1 ms, 2 ms, 10 ms, 50 ms, etc. threshold. If the first instant in time (illustrated in view 210a) and the second instant in time (illustrated in view 210b) occurred within 1 ms, 2 ms, 10 ms, etc. of one another, then the pinch occurring at the second instant in time (illustrated in view 210b) is associated with the gaze-holding event (i.e., the user 102 gaze direction 260 being directed to button 250) at the first instant in time (illustrated in view 210a).
If, on the other hand, the first instant in time (illustrated in view 210a) and the second instant in time (illustrated in view 210b) do not occur within the threshold (e.g., 1 ms, 2 ms, 10 ms, etc.) of one another, then the pinch occurring at the second instant in time (illustrated in view 210b) is not associated with the gaze-holding event that occurred at the first instant in time (illustrated in view 210a). If no prior gaze-holding event occurred within the threshold amount of time, then the device 110 may wait as new gaze data is received and assess such data to determine if a new gaze event occurs following the pinch that occurred at the second instant in time (illustrated in view 210b). For example, the device 110 may determine that the pinch (at the second instant in time illustrated in view 210b) occurred within a predetermined threshold amount of time before an identified gaze-holding event (e.g., occurring at the third instant in time illustrated in view 210c). For example, the threshold may be a 1 ms, 2 ms, 10 ms, 50 ms, etc. threshold. If the third instant in time (illustrated in view 210c) and the second instant in time (illustrated in view 210b) occurred within 1 ms, 2 ms, 10 ms, etc. of one another, then the pinch occurring at the second instant in time (illustrated in view 210b) is associated with the gaze-holding event (i.e., the user 102 gaze direction 260 being directed to button 250) at the third instant in time (illustrated in view 210c). The threshold amounts of time used to assess prior gaze-holding events or wait for new gaze-holding events may be the same or may be different from one another.
If no new gaze-holding event occurs within the threshold amount of time, then the device 110 may determine that the pinch occurring at that second instant in time (illustrated in view 210b) should not be associated with any gaze events. In other words, if no valid gaze-holding event occurs in a window of time before and after a given non-eye-based user activity, the device 110 may determine to not associate that activity with any eye-based activity. The non-gaze-based activity (e.g., a pinch) may still be interpreted as input, but will not be associated with a gaze event/direction. In some implementations, a given input type (e.g., a pinch) is interpreted in a first way when associated with an eye-based event and in another way when not associated with an eye-based event (e.g., the device 110 performs an alternative action or forgoes performing any action). In some implementations, a non-eye-based activity, such as a pinch, is not treated as input unless associated with a gaze event. In some implementations, device 110 presents visual or audible output asking the user 102 for clarification or further input when a non-eye-based activity cannot be associated with a valid gaze-holding event/user interface object.
Note that in some implementations a pinch is determined to occur when a pinch is made (e.g., when the fingers first make contact). In some implementations, a pinch is determined to occur when the pinch is released (e.g., when the fingers separate from one another). In some implementations, a pinch occurs during a period of time during which fingers are touching (e.g., the period between when the fingers first make contact and when the fingers separate from one another). Various implementations disclosed herein may associate a gaze-holding event with a pinch that is determined to occur based on gaze activity at the time at which a pinch is initially made, the period during which the fingers are touching, and/or the time at which the fingers separate.
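As an illustrative sketch, pinch make/release timing might be derived from tracked fingertip positions roughly as follows; the contact threshold and the three-dimensional position format are assumptions.

```python
def fingers_touching(thumb_tip, index_tip, contact_threshold_m: float = 0.01) -> bool:
    """Return True while the fingers are considered to be touching (pinch held)."""
    dx = thumb_tip[0] - index_tip[0]
    dy = thumb_tip[1] - index_tip[1]
    dz = thumb_tip[2] - index_tip[2]
    return (dx * dx + dy * dy + dz * dz) ** 0.5 < contact_threshold_m

# A pinch "make" may be registered on a False -> True transition of this test, a
# pinch "release" on a True -> False transition, and the pinch "period" as the
# span between those two transitions.
```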
In
Such hand gestures may be recognized by a device using one or more sensors of various types. For example, an image sensor may capture a sequence of images that may be interpreted to identify an object (e.g., hand) and its movement path, configuration (e.g., whether fingers are touching/pinching or not), etc.
In
In the example of
The user's gaze may additionally, or alternatively, be associated with a gesture as illustrated in
The eye and hand activities of
In
Based on this activity, the selected user interface object 325 is moved. In this example, the direction and distance that the selected user interface object moves are based on the direction and distance that the hand moves. In some implementations, the direction of user interface object movement is constrained to a direction on a defined 2D plane (e.g., a direction on the 2D plane upon which user interface elements are displayed such as on a virtual screen a few feet in front of the user). For example, the direction of the UI object movement may be constrained to a direction that most closely corresponds to the 3D direction of the hand's movement. In some implementations, the amount of movement/distance is scaled (e.g., 1 inch of hand movement corresponds to 2 inches of UI object movement, 4 inches of UI object movement, 1 foot of UI object movement, etc.).
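One possible way to implement the plane constraint and scaling described above is sketched below; the plane axes are assumed to be unit vectors spanning the 2D plane on which the user interface elements are displayed, and the scale factor is a placeholder.

```python
import numpy as np

def ui_movement_from_hand(hand_delta_3d, plane_x_axis, plane_y_axis, scale: float = 2.0):
    """Map a 3D hand displacement to a scaled 2D displacement on the UI plane.

    The hand displacement is projected onto the plane's axes (constraining the
    object's movement to the 2D plane on which UI elements are displayed) and
    then scaled.
    """
    hand_delta_3d = np.asarray(hand_delta_3d, dtype=float)
    dx = float(np.dot(hand_delta_3d, plane_x_axis))   # component along the plane's x axis
    dy = float(np.dot(hand_delta_3d, plane_y_axis))   # component along the plane's y axis
    return scale * dx, scale * dy
```

With a scale factor of 2.0, for example, one inch of hand movement would correspond to two inches of UI object movement.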
In illustration 305e, the user breaks (e.g., releases) the pinch that was made in illustration 305c and maintained during the movement of illustration 305d. In illustration 305e, the hand engagement user input (breaking the pinch) is treated as input without requiring and/or using any associated gaze or other eye data. The input is simply the separation of the fingers that had been pinched together. In this example, the pinch break of illustration 305e is interpreted as ending the movement of the UI object 325 (i.e., the UI object 325 stops moving based on the movement of the hand once the pinch is broken, even if the hand continues moving (e.g., leftward)). In this example, gaze may not be used during release of a pinch. In other examples, gaze (e.g., identification of gaze-holding events) may be used during pinch release, for example, to identify a position on a user interface at which an action is to be associated.
The hand gestures of illustrations 305d-e may be recognized by a device using one or more sensors of various types. For example, an image sensor may capture a sequence of images that may be interpreted to identify an object (e.g., hand) and its movement path, configuration (e.g., when fingers touch/pinch, when fingers stop touching/pinching), etc.
In
In
In
At block 520, pose stabilization and saccade rejection are applied to the gaze data and/or gaze classifications. The pose stabilization may adjust pose (e.g., position and orientation) for eye twitch and/or small eye movements that do not correspond to intentional/perceptive user eye movements. The saccade rejection may use gaze confidence, tracking state, pupil center, pupil diameter, inter-pupillary distance (IPD), gaze ray data, and velocity data to detect saccades and blinks for removal and/or to identify fixations for gaze interactions. It may distinguish between fixations and saccades to facilitate more accurate gaze-based input. The saccade rejection may involve identifying eye gaze events that correspond to involuntary/reflexive eye saccades and removing (e.g., filtering out) those events (e.g., altering the gaze data to remove gaze data corresponding to those types of gaze events).
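A minimal sketch of such stabilization and rejection is shown below, assuming a per-sample gaze-holding flag from the motion classifier; the window length is a placeholder, and real implementations may additionally use confidence, tracking state, pupil, and IPD signals as noted above.

```python
from collections import deque

class GazeStabilizer:
    """Smooth gaze direction during fixations to suppress twitch/micro-movements.

    A short moving average is applied while the gaze is classified as holding;
    during saccades or blinks the filter is reset and the samples are dropped
    from the interaction signal rather than smoothed.
    """
    def __init__(self, window: int = 5):
        self.history = deque(maxlen=window)

    def update(self, gaze_direction, is_gaze_holding: bool):
        if not is_gaze_holding:
            self.history.clear()       # saccade/blink: reject rather than smooth
            return None
        self.history.append(tuple(gaze_direction))
        n = len(self.history)
        return tuple(sum(axis) / n for axis in zip(*self.history))
```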
At block 530 (hit test manager), the eye gaze data (e.g., eye gaze-holding events identified within the stabilized and saccade removed eye gaze data) is assessed along with user interface collision data 540 to identify eye events corresponding to particular user interface elements. For example, a user interface on a virtual 2D surface or within a 3D region may be presented within a field of view of a 3D environment. Gaze directions of gaze-holding events within that 3D environment may be assessed relative to the user interface elements (e.g., to identify when gaze directions of the gaze-holding events intersect with (or are close to) particular user interface elements). For example, this may involve determining that the user is gazing at a particular user interface element at a particular point in time when a gaze-holding event is occurring.
In some implementations, a hit testing process is utilized that may use gaze ray data, confidence data, gesture data (e.g., hand motion classification), fixation cluster spread data, etc. to loosen/tighten a gaze area based on the precision of gaze tracking and/or user behavior. This process may utilize UI geometry data, for example, from a simulation system that is based on UI information provided by applications, e.g., identifying interaction targets (e.g., which UI elements to associate with a given user activity).
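By way of illustration, a hit test of a gaze ray against a planar user interface element, with a margin used to loosen or tighten the effective gaze area, might look like the following sketch; the element is assumed to be axis-aligned for simplicity, and the margin value is a placeholder.

```python
import numpy as np

def hit_test(gaze_origin, gaze_direction, element_center, element_normal,
             element_half_extents, margin: float = 0.01) -> bool:
    """Return True if the gaze ray hits a planar UI element (with a tolerance margin).

    The margin loosens the effective gaze area, e.g., when gaze tracking is less
    precise or the fixation cluster spread is larger.
    """
    gaze_origin = np.asarray(gaze_origin, float)
    gaze_direction = np.asarray(gaze_direction, float)
    element_center = np.asarray(element_center, float)
    element_normal = np.asarray(element_normal, float)

    denom = np.dot(gaze_direction, element_normal)
    if abs(denom) < 1e-6:
        return False                                  # ray parallel to the element's plane
    t = np.dot(element_center - gaze_origin, element_normal) / denom
    if t < 0:
        return False                                  # element is behind the viewer
    hit_point = gaze_origin + t * gaze_direction
    offset = hit_point - element_center
    # Assumes the element's extents lie along the world x/y axes for simplicity.
    return (abs(offset[0]) <= element_half_extents[0] + margin and
            abs(offset[1]) <= element_half_extents[1] + margin)
```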
At block 560 (pinch & gaze association), hand data 550 is associated with the gaze-holding events and associated user interface elements identified at block 530 (by the hit test manager). This may involve determining that a hand gesture that occurs at a particular instant in time or during a particular period of time should be associated with a particular gaze-holding event and its associated user interface element. As described herein, such association may be based on timing and/or other criteria.
At block 570 (interaction state manager), the hand data 550 associated with gaze-holding events and associated user interface elements is used to manage interactions. For example, user input events may be provided to an application that is providing a user interface so that the application can respond to the user input events (e.g., by changing the user interface). The user input events may identify the user interface element that a given input is associated with (e.g., identifying that the user has provided gaze-plus-pinch input selecting element A, that the user has provided pinch input moving 10 distance units (e.g., in cm, m, km, inches, feet, miles, etc.) to the left, that the user has released a pinch, etc.). User input is thus recognized and used to trigger interaction state updates, in
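For illustration, the user input events handed from the interaction state manager to an application might take a form like the following sketch; the event fields and values are hypothetical examples corresponding to the gaze-plus-pinch selection, pinch-driven movement, and pinch release described above.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class InputEvent:
    element_id: Optional[str]     # UI element the input is associated with, if any
    kind: str                     # e.g., "select", "move", "release"
    delta: tuple = (0.0, 0.0)     # e.g., movement in UI-plane distance units

def example_input_events() -> List[InputEvent]:
    """Example sequence an interaction state manager might deliver to an application."""
    return [
        InputEvent(element_id="element_A", kind="select"),                    # gaze-plus-pinch selection
        InputEvent(element_id="element_A", kind="move", delta=(-10.0, 0.0)),  # pinch moved 10 units left
        InputEvent(element_id="element_A", kind="release"),                   # pinch released
    ]
```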
Pinches occurring during the gaze-holding event 706 are associated with the gaze event 708, which has a corresponding user interface element. Thus, the first pinch 702a (from time 712 to time 713) is associated with this gaze event 708 (and its corresponding UI element). Pinch 702a is identified and, based on it occurring during gaze-holding event 706, the pinch 702a is associated with the user interface element corresponding to gaze event 708, e.g., the pinch is sent as illustrated at marker 720. Similarly, the second pinch 702b (from time 716 to time 717) is also associated with the gaze-holding event 706 (and its corresponding UI element). Pinch 702b is identified and, based on it occurring during the gaze-holding event 706, the pinch 702b is associated with the user interface element corresponding to gaze event 708, e.g., the pinch is determined to have occurred and is sent immediately as illustrated at marker 730.
In this example, a pinch 810 occurs during the saccade while the gaze is outside of the user interface element 815. However, the pinch 810 is not associated with a saccadic instant. Instead, the pinch 810 is associated with a valid target. In this instance, the process selects a valid target to associate using criteria, e.g., selecting the last valid target if such a target occurred within a previous threshold amount of time. In this example, a last valid target is selected, which is the location associated with the gaze event 820d. In this way, unintentional gaze motions (e.g., saccades, blinks, etc.) are not considered for association, since they are removed from the signal and not included in the valid fixation targets.
At block 902, the method 900 includes obtaining gaze data comprising gaze velocity at multiple times, the gaze data obtained based on sensor data of an eye captured by the one or more sensors. For example, the gaze data may be based on a signal of live gaze velocity data obtained based on a stream of live images of the eye captured by an inward facing camera of a head-mounted device (HMD).
At block 904, the method 900 includes identifying gaze-holding events based on the gaze velocity. Identifying the gaze-holding events may involve motion classification, pose stabilization, and/or blink removal. In some implementations, a gaze velocity signal is used to classify eye/gaze motion. Gaze velocity and/or position data may be stabilized, for example, to account for eye twitching and micro-eye movements not associated with voluntary or conscious behavior. In some implementations, an event rejection process is performed to remove gaze events that are associated with saccades, blinks, and other events with which user intentional and/or conscious interactions are not likely to be related.
At block 906, the method 900 includes triggering a user interface response based on determining that a gaze-holding event of the gaze-holding events corresponds to a user interface element. For example, based on a gaze-holding event having a gaze direction directed at a particular user interface icon, the method may include triggering a hover-type response by the user interface (e.g., highlighting that icon as a “hovered” or “in focus” element of the user interface). In another example, based on a gaze-holding event being directed at a particular user interface icon and an occurrence of a user activity (e.g., a pinch gesture) that is associated with the gaze-holding event, the method may include triggering a selection-type response by the user interface (e.g., triggering a selection or “clicked on” action on the user interface icon).
The method may include detecting that an activity has occurred, where the activity is distinct from the gaze-holding events. The activity may be a non-eye-based user activity such as a pinch or other hand gesture. Examples of activity distinct from the gaze-holding events include activities that are separate from the eye, including, but not limited to, single hand gestures, multi-hand gestures, head movements, torso movements, movements with arms or legs, whole body movements, and/or interactions with devices.
Single hand gestures include, but are not limited to, a user forming a shape/configuration and/or making a particular motion with a single hand, for example by pinching (e.g., touching a pointer or other finger to a thumb), grasping (e.g., forming hand into a ball shape), pointing (e.g., by extending one or more fingers in a particular direction), or performing a multi-finger gesture. One example of a hand gesture involves a user pinching where the pinching (e.g., touching finger to thumb and then releasing) provides input (e.g., selection of whatever the user is gazing upon). Another example of a hand gesture involves a user pinching (e.g., to initiate detection of the gesture) followed by a movement or change to the hand while the pinching is maintained (e.g., pinching and then moving the hand to provide a directional input movement based on the direction of the movement of the hand).
One example of a multi-finger gesture is a user spreading all fingers apart (e.g., configuring the hand so that no finger touches any other finger). Another example of a multi-finger gesture is a multi-finger swipe (e.g., extending two or more fingers and moving those fingers along a particular path or across a particular real or virtual surface). Another example of a multi-finger gesture is a hand held approximately flat with all fingers touching adjacent fingers. Another example of a multi-finger gesture is two fingers extended in a peace-sign configuration. Another example of a multi-finger gesture is all fingers extending straight from the palm and then bent at their respective knuckles. Another example of a multi-finger gesture is the thumb touching two or more of the fingers' tips in a particular sequence (e.g., first touching the pointer finger then touching the pinky finger). Another example of a multi-finger gesture is fingers held in a particular configuration (e.g., pointer touching middle finger, middle finger not touching ring finger, ring finger touching pinky finger) while the whole hand moves along a particular path (e.g., up and down).
Multi-hand gestures include, but are not limited to, a user forming a shape/configuration and/or making a particular motion with both hands simultaneously or within a threshold amount of time of one another (e.g., within a 2 second time window). One example of a multi-hand gesture involves a user pinching both hands where the pinching (e.g., touching finger to thumb and then releasing on both hands within a threshold amount of time) provides input (e.g., a particular interaction with whatever the user is gazing upon). Another example of a multi-hand gesture involves a user pinching with both hands within a threshold amount of time of one another (e.g., to initiate detection of the gesture) followed by a movement or change to one or both of the hands while the pinching is maintained (e.g., (a) pinching both hands and then moving the hands towards or apart from one another to provide a zoom in or zoom out input, (b) pinching both hands and then moving both hands left, right, up, down, etc. simultaneously and together to provide a panning input in the direction of movement, or (c) pinching both hands and then moving the hands in a way that maintains the distance between hands while changing their relative positioning to provide rotation input based on the change (e.g., as if holding a string between the hands and rotating the string to provide corresponding rotation input to a user interface element)).
Multi-hand gestures may involve each hand performing a gesture, for example, by pinching (e.g., touching a pointer or other finger to a thumb), grasping (e.g., forming hand into a ball shape), pointing (e.g., by extending one or more fingers in a particular direction), or performing a multi-finger gesture. In one example, a multi-hand gesture is provided (or initiated) by both hands pinching at the same time (e.g., within a threshold time of one another). In one example, a combined (e.g., multi-hand) gesture is based on the timing between two initiation actions (e.g., pinches performed by each hand) and/or the hands' proximity to one another.
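As a non-limiting sketch, a two-handed pinch gesture and a zoom input derived from it might be detected as follows; the time window and the distance handling are assumptions.

```python
def is_two_hand_pinch(left_pinch_time: float, right_pinch_time: float,
                      window_s: float = 2.0) -> bool:
    """Treat two single-hand pinches as one multi-hand gesture when they occur
    within a threshold amount of time of one another."""
    return abs(left_pinch_time - right_pinch_time) <= window_s

def zoom_factor(initial_hand_distance: float, current_hand_distance: float) -> float:
    """While both pinches are maintained, derive a zoom input from how much the
    hands have moved toward (<1.0) or apart from (>1.0) one another."""
    return current_hand_distance / max(initial_hand_distance, 1e-6)
```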
Head gestures may involve a movement of the head with respect to a degree of freedom (e.g., translating, rotating, etc.). Head movement may involve, but is not limited to, a head nodding, shaking, or tilting.
User activity to be associated with a gaze direction may involve user input provided via a device (e.g., a device separate from the HMD or other device that is sensing the user's gaze direction). Such a device may be an input device such as a keyboard, mouse, VR controller, ring, a wearable device such as a watch, a hand-held device such as a phone, tablet, or laptop, or any other type of device capable of interaction or user input.
User activity may involve a user using a hand to interact with a controller or other input device, pressing a hot key, nodding their head, turning their torso, making a facial expression, jumping, sitting, or any other activity performed by a user separate from the user's eye gaze. The activity may be detected based on sensor data (e.g., from an outward facing camera) or based on input device data. The activity may be static (e.g., a user holding a hand steady in a particular configuration), or non-static (e.g., a user making a particular motion such as moving a hand while holding a pinch hand configuration).
In some implementations, a physical keyboard device has one or more keys that correspond to particular gestures. Such keys may be used along with gaze information to interpret user activity. For example, a user may type on the keyboard to provide keyboard based input (e.g., entering characters such as “a,” “b,” “?,” etc.) and at or around the same time also use gaze to provide input. The user may gaze at a position on a text entry window and select a “pinch” key on the keyboard to initiate an action at the gazed-upon location. Thus, the user may be able to utilize the keyboard and provide gaze/pinch type input without needing to remove their hand(s) from the keyboard position (e.g., without reaching off of or away from the keyboard to interact with a mouse, trackpad, or make a spatially-separated/off keyboard pinching gesture). Rather the user's hands may remain in place hovering above the keyboard. In other implementations, a user is enabled to make a pinching gesture with their hands on or just above a keyboard rather than or in addition to using a pinch key on the keyboard. In some implementations, a device such as a keyboard has a dedicated or assigned key (or button) that corresponds to a pinch (or equivalent) interaction of a pinch and gaze interaction. An HMD may display distinguishing visual characteristics around such a key or button so that the user recognizes its special functionality. Similarly, special or otherwise distinguishing sounds may be presented when such a key is used to further emphasize or distinguish the key or button from other functions on the keyboard or other physical input device.
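The dedicated "pinch" key interaction described above might be handled roughly as in the following sketch; the key identifier and the selection call on the gazed-upon element are hypothetical.

```python
PINCH_KEY = "f13"   # hypothetical dedicated/assigned "pinch" key on the physical keyboard

def on_key_event(key: str, current_gaze_target) -> None:
    """Treat a press of the designated key as the pinch half of a pinch-and-gaze interaction.

    current_gaze_target is the UI element (if any) corresponding to the user's current
    gaze-holding event, e.g., the gazed-upon location in a text entry window.
    """
    if key == PINCH_KEY and current_gaze_target is not None:
        current_gaze_target.select()   # hypothetical: initiate an action at the gazed-upon location
```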
The method 900 may associate a gaze-holding event with the activity. Accordingly, in various implementations, a gaze-holding event is associated with one or more of a gesture made by a single hand, a gesture that involves one or more fingers, a gesture made by multiple hands, a gesture made by a head, a gesture made by hand and head positions/movements made at approximately the same time, and/or inputs to a device such as a controller, input device, wearable device, or hand-held device.
Associating the gaze-holding event with the activity may be based on determining that activity occurred during the gaze-holding event. Associating the gaze-holding event with the activity may involve determining that the activity did not occur during any of the gaze-holding events and determining that the activity occurred within a threshold time after the gaze-holding event. Associating the gaze-holding event with the activity may involve determining that the activity did not occur during any of the gaze-holding events, determining that the activity did not occur within a threshold time after any of the gaze-holding events, and determining that the gaze-holding event occurred within a threshold time after the activity.
In some implementations, the method 900 includes associating a gaze-holding event with another user activity (e.g., a pinch) during the presentation of content on a device such as an HMD. In such implementations, the gaze-holding event may be associated with the user gazing at a portion of the content that is being presented and thus the association may associate the other user activity (e.g., the pinch) with that portion of the content. In some implementations, the content is provided within a view of a 3D environment such as an XR environment. In some implementations, the view comprises only virtual content. In some implementations, the view comprises mixed reality or augmented reality content. In some implementations, at least a portion of the 3D environment depicted in the view corresponds to a physical environment proximate the device (e.g., via passthrough video or via a see-through (e.g., transparent) portion of the device). In some implementations, a 3D environment is not presented. For example, a user's gaze-holding events may be associated with input that is received while the user gazes at and provides activity that is input to a device such as a smart phone or tablet (i.e., a device that does not present 3D content or use stereoscopic display to display content at different depths).
Gaze velocity data may be assessed or filtered in a way that accounts for fast fixation inaccuracies such as those associated with short duration gaze-holding events that occur following significant gaze-shifting/saccadic events. For example, the method may include ignoring potential gaze-holding events that have a short duration and that follow a gaze shifting/saccadic event in which the gaze has shifted more than a threshold amount or at more than a threshold rate (e.g., based on absolute gaze directional change amount or gaze velocity associated with a saccadic event). In some implementations, gaze-holding events that are identified based on gaze velocity exclude potential gaze-holding events occurring during a time period following a saccadic event in the velocity data where the saccadic event has an amplitude greater than a threshold. In some implementations, gaze-holding events that are identified based on gaze velocity exclude potential gaze-holding events occurring during a time period following a saccadic event in the velocity data where the saccadic event has a velocity that is greater than a threshold velocity and/or changes at a rate that is greater than a threshold rate of change. Furthermore, in some additional or alternative implementations, gaze-holding events that are identified based on gaze travel exclude potential gaze-holding events occurring during a time period following a saccadic event in the eye tracking data where the saccadic event has a distance of eye travel that is greater than a threshold distance.
Gaze-holding events or portions thereof may be rejected from consideration with respect to providing user interface responses based on various criteria. In some implementations, this involves rejecting a portion of a gaze-holding event at the beginning of the gaze-holding (and accepting the rest), even when the gaze-holding event lasts longer than the rejection period. A user may saccade and land on a target next to their intended target, and then from there drift slowly to the intended target. In this case, the period from landing on the neighboring target through the drift is rejected, but the portion of the gaze-holding event occurring after landing on the intended target is accepted.
Some implementations reduce or prevent gaze flicker (e.g., preventing UI highlighting in response to gaze when it is not yet settled). Some implementations utilize rules to exclude initial portions of a gaze-holding event, e.g., after certain large saccadic motions, to allow gaze to settle before shifting focus from a previous UI element to a new UI element on which the gaze has now settled. Some implementations use such an exclusion for only some types of actions, e.g., exclusions only applicable to gaze highlighting. Thus, in this example, if the user pinches during that initial gaze-holding event, the pinch action will be sent immediately and associated with the new UI element on which gaze is now holding. The exclusion may be time based, e.g., after small saccades excluding the first 44 ms of gaze holding and after large saccades excluding the first 88 ms. In some implementations, at least a potential gaze-holding event occurring during a time period following a saccadic event is excluded, where the potential gaze-holding event is excluded based on (a) an amplitude representing an amount of velocity change during the saccadic event, (b) a rate of change of the velocity during the saccadic event, (c) a duration of the potential gaze-holding event, and/or (d) distance of eye travel during the saccadic event.
In some implementations, a potential gaze-holding event is excluded based on it occurring between two saccadic events having one or more particular characteristics such as those described above. For example, in the case where there is a large saccade, a short intermediate gaze-holding event and then another large saccade, the intermediate gaze-holding event may be rejected.
In some implementations, a small saccade following one or more large saccades that might be erroneously classified as a gaze-holding event is correctly characterized (i.e., as a small saccade rather than a gaze-holding event) based on determining that it follows a saccadic event having one or more particular characteristics such as those described above. Similarly, a gaze classifier may classify gaze data associated with a continuous saccade by falsely identifying a gap (and thus classifying the second portion of the saccade as a gaze-holding event). Such an erroneous classification may be correctly characterized (e.g., as a saccadic event rather than a gaze-holding event) based on determining that it follows a saccadic event having one or more particular characteristics such as those described above.
Excluding potential gaze events in such circumstances may be beneficial because when a user makes a large eye movement (e.g., a saccade of large amplitude), the eye may not go as quickly to an intended gaze target as in other circumstances. When the eye makes a large eye movement, it often does not land exactly where the user intends (e.g., on an intended user interface target). Often, the eyes naturally land around the general area (not on it exactly) and then move and adjust to the exact location of the intended user interface element following subsequent gaze-holding event(s).
The system may exclude the one or more initial gaze-holding events (e.g., due to velocity, distance of eye travel, time-proximity to a significant gaze shifting event, etc.) following a significant gaze shift (e.g., a high-amplitude saccadic event). For example, following a blink or gaze loss, an initial portion (e.g., 88 ms) of a gaze-holding event may be excluded. The system may interpret a later gaze-holding event as the appropriate gaze-holding event to use to identify gaze direction in a triggered user interface response. After a saccade, for example, averaging may be used. For example, this may involve applying a simple 4-tap averaging process to the 1-sample difference of the gaze during the saccade (e.g., a distance equivalent to the distance traveled during the last 4 frames divided by 4). In one example, if the average is less than a threshold when the saccade finishes, the exclusion period may be 44 ms; otherwise it is 88 ms. Different time thresholds may of course be used.
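The 4-tap averaging example above might be sketched as follows, treating gaze position as a scalar angle for simplicity; the averaging threshold is a placeholder, and the 44 ms/88 ms values are the example exclusion periods described above.

```python
def exclusion_period_after_saccade(gaze_positions_deg, avg_threshold_deg: float = 1.5) -> float:
    """Choose how much of the start of the next gaze-holding event to exclude.

    A 4-tap average of the 1-sample gaze differences during the saccade (i.e., the
    distance traveled over the last 4 frames divided by 4) is compared to a
    threshold: smaller saccades use the shorter exclusion (44 ms), larger ones the
    longer exclusion (88 ms).
    """
    last = gaze_positions_deg[-5:]                       # the last 4 frames require 5 positions
    diffs = [abs(b - a) for a, b in zip(last, last[1:])]
    avg = sum(diffs) / max(len(diffs), 1)
    return 0.044 if avg < avg_threshold_deg else 0.088   # exclusion period in seconds
```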
Excluding gaze-holding events that occur soon after such large eye movements may thus help ensure that a gaze-holding event is only used to trigger user interface responses in circumstances in which the gaze-holding event is likely to have a gaze direction that corresponds to an intended gaze target.
In some implementations, the one or more communication buses 1004 include circuitry that interconnects and controls communications between system components. In some implementations, the one or more I/O devices and sensors 1006 include at least one of an inertial measurement unit (IMU), an accelerometer, a magnetometer, a gyroscope, a thermometer, one or more physiological sensors (e.g., blood pressure monitor, heart rate monitor, blood oxygen sensor, blood glucose sensor, etc.), one or more microphones, one or more speakers, a haptics engine, one or more depth sensors (e.g., a structured light sensor, a time-of-flight sensor, or the like), and/or the like.
In some implementations, the one or more output device(s) 1012 include one or more displays configured to present a view of a 3D environment to the user. In some implementations, the one or more displays correspond to holographic, digital light processing (DLP), liquid-crystal display (LCD), liquid-crystal on silicon (LCoS), organic light-emitting field-effect transistor (OLET), organic light-emitting diode (OLED), surface-conduction electron-emitter display (SED), field-emission display (FED), quantum-dot light-emitting diode (QD-LED), micro-electromechanical system (MEMS), and/or the like display types. In some implementations, the one or more displays correspond to diffractive, reflective, polarized, holographic, etc. waveguide displays. In one example, the device 1000 includes a single display. In another example, the device 1000 includes a display for each eye of the user.
In some implementations, the one or more output device(s) 1012 include one or more audio producing devices. In some implementations, the one or more output device(s) 1012 include one or more speakers, surround sound speakers, speaker-arrays, or headphones that are used to produce spatialized sound, e.g., 3D audio effects. Such devices may virtually place sound sources in a 3D environment, including behind, above, or below one or more listeners. Generating spatialized sound may involve transforming sound waves (e.g., using head-related transfer function (HRTF), reverberation, or cancellation techniques) to mimic natural soundwaves (including reflections from walls and floors), which emanate from one or more points in a 3D environment. Spatialized sound may trick the listener's brain into interpreting sounds as if the sounds occurred at the point(s) in the 3D environment (e.g., from one or more particular sound sources) even though the actual sounds may be produced by speakers in other locations. The one or more output device(s) 1012 may additionally or alternatively be configured to generate haptics.
In some implementations, the one or more image sensor systems 1014 are configured to obtain image data that corresponds to at least a portion of a physical environment. For example, the one or more image sensor systems 1014 may include one or more RGB cameras (e.g., with a complementary metal-oxide-semiconductor (CMOS) image sensor or a charge-coupled device (CCD) image sensor), monochrome cameras, IR cameras, depth cameras, event-based cameras, and/or the like. In various implementations, the one or more image sensor systems 1014 further include illumination sources that emit light, such as a flash. In various implementations, the one or more image sensor systems 1014 further include an on-camera image signal processor (ISP) configured to execute a plurality of processing operations on the image data.
The memory 1020 includes high-speed random-access memory, such as DRAM, SRAM, DDR RAM, or other random-access solid-state memory devices. In some implementations, the memory 1020 includes non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid-state storage devices. The memory 1020 optionally includes one or more storage devices remotely located from the one or more processing units 1002. The memory 1020 comprises a non-transitory computer readable storage medium.
In some implementations, the memory 1020 or the non-transitory computer readable storage medium of the memory 1020 stores an optional operating system 1030 and one or more instruction set(s) 1040. The operating system 1030 includes procedures for handling various basic system services and for performing hardware dependent tasks. In some implementations, the instruction set(s) 1040 include executable software defined by binary information stored in the form of electrical charge. In some implementations, the instruction set(s) 1040 are software that is executable by the one or more processing units 1002 to carry out one or more of the techniques described herein.
The instruction set(s) 1040 include user action tracking instruction set(s) 1042 configured to, upon execution, associate user activity with gaze-holding events as described herein. The instruction set(s) 1040 may be embodied as a single software executable or multiple software executables.
Although the instruction set(s) 1040 are shown as residing on a single device, it should be understood that in other implementations, any combination of the elements may be located in separate computing devices. Moreover, the figure is intended more as a functional description of the various features present in a particular implementation than as a structural schematic of the implementations described herein. As recognized by those of ordinary skill in the art, items shown separately could be combined and some items could be separated. The actual number of instruction sets and how features are allocated among them may vary from one implementation to another and may depend in part on the particular combination of hardware, software, and/or firmware chosen for a particular implementation.
It will be appreciated that the implementations described above are cited by way of example, and that the present invention is not limited to what has been particularly shown and described hereinabove. Rather, the scope includes both combinations and subcombinations of the various features described hereinabove, as well as variations and modifications thereof which would occur to persons skilled in the art upon reading the foregoing description and which are not disclosed in the prior art.
As described above, one aspect of the present technology is the gathering and use of sensor data that may include user data to improve a user's experience of an electronic device. The present disclosure contemplates that in some instances, this gathered data may include personal information data that uniquely identifies a specific person or can be used to identify interests, traits, or tendencies of a specific person. Such personal information data can include movement data, physiological data, demographic data, location-based data, telephone numbers, email addresses, home addresses, device characteristics of personal devices, or any other personal information.
The present disclosure recognizes that the use of such personal information data, in the present technology, can be used to the benefit of users. For example, the personal information data can be used to improve the content viewing experience. Accordingly, use of such personal information data may enable calculated control of the electronic device. Further, other uses for personal information data that benefit the user are also contemplated by the present disclosure.
The present disclosure further contemplates that the entities responsible for the collection, analysis, disclosure, transfer, storage, or other use of such personal information and/or physiological data will comply with well-established privacy policies and/or privacy practices. In particular, such entities should implement and consistently use privacy policies and practices that are generally recognized as meeting or exceeding industry or governmental requirements for keeping personal information data private and secure. For example, personal information from users should be collected for legitimate and reasonable uses of the entity and not shared or sold outside of those legitimate uses. Further, such collection should occur only after receiving the informed consent of the users. Additionally, such entities should take any needed steps for safeguarding and securing access to such personal information data and ensuring that others with access to the personal information data adhere to their privacy policies and procedures. Further, such entities can subject themselves to evaluation by third parties to certify their adherence to widely accepted privacy policies and practices.
Despite the foregoing, the present disclosure also contemplates implementations in which users selectively block the use of, or access to, personal information data. That is, the present disclosure contemplates that hardware or software elements can be provided to prevent or block access to such personal information data. For example, in the case of user-tailored content delivery services, the present technology can be configured to allow users to select to “opt in” or “opt out” of participation in the collection of personal information data during registration for services. In another example, users can select not to provide personal information data for targeted content delivery services. In yet another example, users can select to not provide personal information, but permit the transfer of anonymous information for the purpose of improving the functioning of the device.
Therefore, although the present disclosure broadly covers use of personal information data to implement one or more various disclosed embodiments, the present disclosure also contemplates that the various embodiments can also be implemented without the need for accessing such personal information data. That is, the various embodiments of the present technology are not rendered inoperable due to the lack of all or a portion of such personal information data. For example, content can be selected and delivered to users by inferring preferences or settings based on non-personal information data or a bare minimum amount of personal information, such as the content being requested by the device associated with a user, other non-personal information available to the content delivery services, or publicly available information.
In some embodiments, data is stored using a public/private key system that only allows the owner of the data to decrypt the stored data. In some other implementations, the data may be stored anonymously (e.g., without identifying and/or personal information about the user, such as a legal name, username, time and location data, or the like). In this way, other users, hackers, or third parties cannot determine the identity of the user associated with the stored data. In some implementations, a user may access their stored data from a user device that is different than the one used to upload the stored data. In these instances, the user may be required to provide login credentials to access their stored data.
Numerous specific details are set forth herein to provide a thorough understanding of the claimed subject matter. However, those skilled in the art will understand that the claimed subject matter may be practiced without these specific details. In other instances, methods, apparatuses, or systems that would be known by one of ordinary skill have not been described in detail so as not to obscure the claimed subject matter.
Unless specifically stated otherwise, it is appreciated that throughout this specification discussions utilizing the terms such as “processing,” “computing,” “calculating,” “determining,” and “identifying” or the like refer to actions or processes of a computing device, such as one or more computers or a similar electronic computing device or devices, that manipulate or transform data represented as physical electronic or magnetic quantities within memories, registers, or other information storage devices, transmission devices, or display devices of the computing platform.
The system or systems discussed herein are not limited to any particular hardware architecture or configuration. A computing device can include any suitable arrangement of components that provides a result conditioned on one or more inputs. Suitable computing devices include multipurpose microprocessor-based computer systems accessing stored software that programs or configures the computing system from a general-purpose computing apparatus to a specialized computing apparatus implementing one or more implementations of the present subject matter. Any suitable programming, scripting, or other type of language or combinations of languages may be used to implement the teachings contained herein in software to be used in programming or configuring a computing device.
Implementations of the methods disclosed herein may be performed in the operation of such computing devices. The order of the blocks presented in the examples above can be varied; for example, blocks can be re-ordered, combined, and/or broken into sub-blocks. Certain blocks or processes can be performed in parallel.
The use of “adapted to” or “configured to” herein is meant as open and inclusive language that does not foreclose devices adapted to or configured to perform additional tasks or steps. Additionally, the use of “based on” is meant to be open and inclusive, in that a process, step, calculation, or other action “based on” one or more recited conditions or values may, in practice, be based on additional conditions or values beyond those recited. Headings, lists, and numbering included herein are for ease of explanation only and are not meant to be limiting.
It will also be understood that, although the terms “first,” “second,” etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first node could be termed a second node, and, similarly, a second node could be termed a first node, without changing the meaning of the description, so long as all occurrences of the “first node” are renamed consistently and all occurrences of the “second node” are renamed consistently. The first node and the second node are both nodes, but they are not the same node.
The terminology used herein is for the purpose of describing particular implementations only and is not intended to be limiting of the claims. As used in the description of the implementations and the appended claims, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term “and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
As used herein, the term “if” may be construed to mean “when” or “upon” or “in response to determining” or “in accordance with a determination” or “in response to detecting” that a stated condition precedent is true, depending on the context. Similarly, the phrase “if it is determined [that a stated condition precedent is true]” or “if [a stated condition precedent is true]” or “when [a stated condition precedent is true]” may be construed to mean “upon determining” or “in response to determining” or “in accordance with a determination” or “upon detecting” or “in response to detecting” that the stated condition precedent is true, depending on the context.
The foregoing description and summary of the invention are to be understood as being in every respect illustrative and exemplary, but not restrictive, and the scope of the invention disclosed herein is not to be determined only from the detailed description of illustrative implementations but according to the full breadth permitted by patent laws. It is to be understood that the implementations shown and described herein are only illustrative of the principles of the present invention and that various modifications may be implemented by those skilled in the art without departing from the scope and spirit of the invention.
This application claims the benefit of U.S. Provisional Application Ser. No. 63/409,147, filed Sep. 22, 2022, and of U.S. Provisional Patent Application No. 63/453,506, filed Mar. 21, 2023, each of which is incorporated by reference herein in its entirety.