The present disclosure generally relates to assessing user interactions with electronic devices that involve hand-based input, e.g., a user moving a hand while pinching or making another hand gesture to reposition a user interface element within a three-dimensional environment such as a three-dimensional (3D) extended reality (XR) environment.
Existing user interaction systems may be improved with respect to facilitating interactions based on user activities.
Various implementations disclosed herein include devices, systems, and methods that reposition user interface elements based on user activities. Some implementations determine how to move a user interface element based on interpreting hand-based input relative to a reference point, e.g., a reference point on the user's body. For example, the position of a user interface element such as a window may be controlled based on (a) a ray direction from a reference point located at a user's shoulder joint through a hand position located at the user's pinch position and/or (b) a distance between the shoulder point and the pinch position. A pinch position may be a pinch centroid that is determined to be an intermediate point at which a pinch is considered to occur, e.g., based on index fingertip and thumb-tip positions. A reference point such as a shoulder joint may have a location that is not continuously observed by sensors on the device, e.g., a user's torso may be only occasionally captured by sensors on a head-mounted device (HMD). The location of such a reference point may thus be determined based on inferring the position and/or orientation of a portion of the user's body (e.g., the user's torso) from a position and/or orientation of another portion of the user's body (e.g., the user's head) with which more continuous observation data is available. Such inference may, in some circumstances, result in inaccurate reference positions (e.g., a determined shoulder joint position and/or orientation that does not correspond to the position and/or orientation of the user's shoulder). Such inaccuracy may result in undesirable responses to user activity that is based on the inferred reference point. In one example, inaccurate reference point positioning results in inaccuracy during a user interaction in which the user is moving their hand to control the movement of a user interface element. Inaccuracy may result, for example, based on head pose where head rotation is interpreted as, but does not actually correspond to, reference point movement. Such inaccurate reference point movement may result in unintended movement of the user interface element. Some implementations stabilize a UI element using a stabilization property that is determined based on assessing the user's movement, e.g., assessing a hand movement characteristic and a head movement characteristic. In one example, the stabilization property restricts movement of user interface elements in certain circumstances, e.g., based on head movement (e.g., angular speed), hand movement (e.g., pinch centroid angular speed), and/or hand translation (e.g., pinch centroid speed). A stabilization property, for example, may be used to restrict and/or minimize user interface movement that is based on a reference point in circumstances in which an inferred reference position is more likely to be inaccurate.
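As a minimal illustrative sketch (not a description of any particular implementation), the following Python fragment shows one way the shoulder-to-pinch ray and distance described above could be combined to place a window; the function name, the parameters, and the choice to scale the shoulder-to-pinch distance by a fixed factor are assumptions for illustration only.

```python
import numpy as np

def place_window(shoulder: np.ndarray,
                 pinch_centroid: np.ndarray,
                 distance_scale: float = 3.0) -> np.ndarray:
    """Return a candidate 3D window position.

    A ray is cast from the shoulder reference point through the pinch
    centroid; the window is placed along that ray at a distance derived
    from the shoulder-to-pinch distance (the scale factor is an arbitrary
    illustrative choice).
    """
    offset = pinch_centroid - shoulder
    reach = np.linalg.norm(offset)           # shoulder-to-pinch distance
    direction = offset / max(reach, 1e-6)    # ray direction (unit vector)
    return shoulder + direction * (reach * distance_scale)

# Example: shoulder near the body, pinch slightly in front and to the right.
window_pos = place_window(np.array([0.2, 1.4, 0.0]),
                          np.array([0.35, 1.2, -0.45]))
```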
In some implementations, a processor performs a method by executing instructions stored on a computer readable medium of an electronic device having one or more sensors. The method involves tracking a hand movement characteristic corresponding to a hand. The hand may be tracked based on first sensor data obtained via the one or more sensors. Tracking the hand characteristic may involve tracking movement, angular speed, or other attribute of a pinch centroid, i.e., a position associated with a user gesture involving pinching together two or more fingers. In one example, a position of a pinch (e.g., pinch centroid) is determined based on images captured by the one or more sensors, e.g., via outward facing sensors of a head mounted device (HMD).
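A brief sketch of one possible pinch-centroid computation is shown below; it assumes the centroid is simply the midpoint of tracked index-fingertip and thumb-tip positions, which is an illustrative choice rather than a required definition.

```python
import numpy as np

def pinch_centroid(index_tip: np.ndarray, thumb_tip: np.ndarray) -> np.ndarray:
    """Intermediate point at which the pinch is considered to occur.

    Here the centroid is the midpoint of the two fingertip positions; a real
    tracker might weight or temporally filter the fingertip estimates.
    """
    return 0.5 * (index_tip + thumb_tip)
```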
The method may further involve determining a head movement characteristic corresponding to a head. The head may be tracked based on second sensor data obtained via the one or more sensors. A reference position (e.g., shoulder joint, chest point, elbow joint, wrist joint, etc.) may be determined based on the head position (e.g., a tracked head pose). Determining the head movement characteristic may involve data from the one or more sensors. For example, this may involve using motion sensors and/or other sensors on an HMD worn on a head of a user to track head pose (e.g., 6DOF position and orientation) and using that head pose to infer probable shoulder positions, e.g., based on an assumed fixed relationship between head pose and shoulder position. In some implementations, the head movement characteristic may include a measure of motion of the head such as angular speed, etc.
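The sketch below illustrates one way a shoulder reference position could be inferred from head pose under an assumed fixed head-to-shoulder offset rotated by head yaw; the offset values and the yaw-only rotation are illustrative assumptions, and they also show why head rotation alone can swing the inferred reference point.

```python
import numpy as np

def infer_shoulder(head_position: np.ndarray,
                   head_yaw_radians: float,
                   right_offset: float = 0.18,
                   down_offset: float = 0.25) -> np.ndarray:
    """Infer a (right) shoulder position from head pose.

    Assumes a fixed offset from the head, rotated by head yaw only; the
    offsets are illustrative and could be calibrated per user.
    """
    cos_y, sin_y = np.cos(head_yaw_radians), np.sin(head_yaw_radians)
    # Offset expressed in a yaw-aligned head frame (x: right, y: up, z: back).
    local = np.array([right_offset, -down_offset, 0.0])
    rotated = np.array([cos_y * local[0] + sin_y * local[2],
                        local[1],
                        -sin_y * local[0] + cos_y * local[2]])
    return head_position + rotated
```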
The method may further involve determining a stabilization property based on the hand movement characteristic and the head movement characteristic. In one example, the stabilization property is determined based on detecting head rotation that is mostly independent of and/or significantly different than hand motion, i.e., a circumstance in which a reference position inferred based on head position may be inaccurate. The stabilization property may fix the position of a user interface element that is being controlled by the hand motion (e.g., allowing no movement of the user interface element when a threshold head-to-hand motion differential is exceeded). The stabilization property may dampen movement of the user interface element based on the relative amount by which head motion exceeds hand motion.
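One possible way to express such a stabilization property is sketched below as a scalar factor between 0 (fixed) and 1 (regular movement); the thresholds, the linear ramp, and the use of angular speeds as inputs are assumptions made for illustration.

```python
def stabilization_factor(head_angular_speed: float,
                         hand_angular_speed: float,
                         fix_threshold: float = 0.6,
                         full_motion_threshold: float = 0.1) -> float:
    """Return a factor in [0, 1] applied to UI-element motion.

    0.0 fixes the element in place, 1.0 leaves motion unchanged, and values
    in between dampen it. The factor shrinks as head rotation exceeds hand
    rotation; both thresholds (rad/s) are illustrative assumptions.
    """
    excess = max(head_angular_speed - hand_angular_speed, 0.0)
    if excess >= fix_threshold:
        return 0.0   # fix: head-only rotation likely
    if excess <= full_motion_threshold:
        return 1.0   # regular placement
    # Linear ramp between the two thresholds (dampening region).
    return 1.0 - (excess - full_motion_threshold) / (fix_threshold - full_motion_threshold)
```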
The method may further involve repositioning a user interface element displayed by the device based on a three-dimensional (3D) position of the hand (e.g., pinch centroid location), a 3D position of the reference position (e.g., shoulder location), and the stabilization property. This may involve fixing or dampening movement of the user interface element in certain circumstances. Such fixing or dampening may prevent unintended movement of the user interface element, for example, in circumstances in which a user shakes their head from side-to-side. Such side-to-side head motion may otherwise (without the fixing or dampening) cause movement of the UI element by causing inaccurate movement of a reference position associated with the head's pose that does not correspond to the position of the portion of the user's body associated with the reference position, e.g., an inferred shoulder position may differ from the actual shoulder position as the user shakes their head side-to-side. Implementations disclosed herein stabilize user interface elements in these and other circumstances in which certain user activity is not intended to result in user interface element motion and/or inferred reference point position may be inaccurate.
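Continuing the sketches above, applying the stabilization property during repositioning could look like the following, where the per-frame change toward the ray/distance-derived target position is scaled by the factor; this is a hypothetical composition of the earlier illustrative functions, not a required implementation.

```python
import numpy as np

def reposition(current_ui_pos: np.ndarray,
               target_ui_pos: np.ndarray,
               stabilization: float) -> np.ndarray:
    """Move the UI element toward its target position.

    The per-frame delta is scaled by the stabilization factor, so a factor
    of 0.0 fixes the element and intermediate values dampen its motion.
    """
    delta = target_ui_pos - current_ui_pos
    return current_ui_pos + stabilization * delta
```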
In accordance with some implementations, a device includes one or more processors, a non-transitory memory, and one or more programs; the one or more programs are stored in the non-transitory memory and configured to be executed by the one or more processors and the one or more programs include instructions for performing or causing performance of any of the methods described herein. In accordance with some implementations, a non-transitory computer readable storage medium has stored therein instructions, which, when executed by one or more processors of a device, cause the device to perform or cause performance of any of the methods described herein. In accordance with some implementations, a device includes: one or more processors, a non-transitory memory, and means for performing or causing performance of any of the methods described herein.
So that the present disclosure can be understood by those of ordinary skill in the art, a more detailed description may be had by reference to aspects of some illustrative implementations, some of which are shown in the accompanying drawings.
In accordance with common practice the various features illustrated in the drawings may not be drawn to scale. Accordingly, the dimensions of the various features may be arbitrarily expanded or reduced for clarity. In addition, some of the drawings may not depict all of the components of a given system, method or device. Finally, like reference numerals may be used to denote like features throughout the specification and figures.
Numerous details are described in order to provide a thorough understanding of the example implementations shown in the drawings. However, the drawings merely show some example aspects of the present disclosure and are therefore not to be considered limiting. Those of ordinary skill in the art will appreciate that other effective aspects and/or variants do not include all of the specific details described herein. Moreover, well-known systems, methods, components, devices and circuits have not been described in exhaustive detail so as not to obscure more pertinent aspects of the example implementations described herein.
In some implementations, views of an extended reality (XR) environment may be provided to one or more participants (e.g., user 102 and/or other participants not shown) via electronic device 105 (e.g., a wearable device such as an HMD, a handheld device such as a mobile device, a tablet computing device, a laptop computer, etc.). Such an XR environment may include views of a 3D environment that are generated based on camera images and/or depth camera images of the physical environment 100, as well as a representation of user 102 based on camera images and/or depth camera images of the user 102. Such an XR environment may include virtual content that is positioned at 3D locations relative to a 3D coordinate system (i.e., a 3D space) associated with the XR environment, which may correspond to a 3D coordinate system of the physical environment 100.
In some implementations, video (e.g., pass-through video depicting a physical environment) is received from an image sensor of a device (e.g., device 105). In some implementations, a 3D representation of a virtual environment is aligned with a 3D coordinate system of the physical environment. A sizing of the 3D representation of the virtual environment may be generated based on, inter alia, a scale of the physical environment or a positioning of an open space, floor, wall, etc. such that the 3D representation is configured to align with corresponding features of the physical environment. In some implementations, a viewpoint within the 3D coordinate system may be determined based on a position of the electronic device within the physical environment. The viewpoint may be determined based on, inter alia, image data, depth sensor data, motion sensor data, etc., which may be retrieved via a visual inertial odometry (VIO) system, a simultaneous localization and mapping (SLAM) system, etc.
In the examples of
In this example, the background portion 235 of the user interface element 230 is flat. In this example, the background portion 235 includes aspects of the user interface element 230 being displayed except for the feature icons 242, 244, 246, 248. Displaying a background portion of a user interface of an operating system or application as a flat surface may provide various advantages. Doing so may provide a portion of an XR environment that is easy to understand or otherwise use for accessing the user interface of an application. In some implementations, multiple user interfaces (e.g., corresponding to multiple, different applications) are presented sequentially and/or simultaneously within an XR environment, e.g., within one or more colliders or other such components.
In some implementations, the positions and/or orientations of such one or more user interfaces may be determined to facilitate visibility and/or use. The one or more user interfaces may be at fixed positions and orientations within the 3D environment. In such cases, user movements would not affect the position or orientation of the user interfaces within the 3D environment. In other cases, user movements will affect the position and/or orientation of the user interfaces within the 3D environment. The user 102 may intentionally (or unintentionally) perform movements that result in repositioning of the user interface within the 3D environment.
In some implementations the user interface element 230 has a flat portion (e.g., background portion 235) that is desirably positioned in a generally orthogonal orientation such that it generally faces the user's torso, i.e., is orthogonal to a direction (not shown) from the user to the user interface element 230. For example, providing such an orientation may involve reorienting the user interface element 230 when the user moves and/or when the user provides input changing the positioning of the user interface element 230 such that the flat portion remains generally facing the user.
The initial position of the user interface within the 3D environment (e.g., when an app providing the user interface is first launched) may be based on determining a predetermined distance of the user interface from the user (e.g., from an initial or current user position) and predetermined direction (e.g., directly in front of the user and facing the user). The position and/or distance from the user may be determined based on various criteria including, but not limited to, criteria that accounts for application type, application functionality, content type, content/text size, environment type, environment size, environment complexity, environment lighting, presence of others in the environment, use of the application or content by multiple users, user preferences, user input, and numerous other factors.
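The following is a minimal sketch of initial placement at a predetermined distance directly in front of and facing the user; the function name, the default distance, and the use of a single facing direction are assumptions for illustration only.

```python
import numpy as np

def initial_window_position(user_position: np.ndarray,
                            facing_direction: np.ndarray,
                            distance: float = 1.5) -> np.ndarray:
    """Place a newly launched window a predetermined distance directly in
    front of the user; the 1.5 m default is an illustrative choice that
    could be adjusted per the criteria described above."""
    unit = facing_direction / np.linalg.norm(facing_direction)
    return user_position + unit * distance
```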
Two examples are depicted in
Implementations disclosed herein interpret such user movements based on determining a 3D position of the user's hand relative to one or more reference positions. For example, the reference positions may be determined based on determining (i.e., estimating) a current position of a feature of the user's torso (e.g., a shoulder joint position, chest center position, etc.), other body part (e.g., an elbow position, a wrist position, etc.), or other position that may be based upon, but outside of, the user's body (e.g., a position 5 inches in front of the user's chest).
In the examples of
The user's gesture (i.e., the positioning of their hand while pinching) and movement are interpreted based on the reference positions. At a point in time (e.g., every frame of a sequence of frames used to display the views), the current position of the reference position and the current position of the user's hand are used to determine how to position a corresponding user interface element.
In these examples, the user's gesture is associated with window movement icon 250. Generally, the user initiates an interaction with the window movement icon 250 (e.g., by pinching while gazing at the window movement icon 250). The user's gaze may be tracked and the gaze direction relative to user interface components in the XR environment may be used to identify when the user is gazing at particular user interface elements. After interaction with the window movement icon is initiated, subsequent movement (e.g., movement of the hand while the pinch continues) is interpreted to move the user interface element 230 that is associated with the user interface movement icon 250, i.e., as the user moves their hand left, right, up, down, forward, and backward, corresponding movements are performed for the user interface element 230. Such movements of the user interface element 230 change its position within the 3D XR environment and thus within the 2D views of the XR environment provided to the user. Between views 205a and 205b, the user moves their hand to the right from the position of depiction 222a to the position of depiction 222b. Between views 205a and 205c, the user moves their hand away from their body, from the position of depiction 222a to the position of depiction 222c. These changes in hand position are interpreted based on reference positions (i.e., reference positions 216a-c) to reposition the user interface element 230 as explained next.
The reference position(s) (e.g., reference position 216a, 216b, 216c) are used to determine how to interpret the hand's position, i.e., how to reposition the user interface element 230 based on the hand's current position. Specifically, in this example, as the user moves their hand (and/or the reference position), a ray from the reference position through a point associated with the hand (e.g., a pinch centroid) is moved. The ray is used to reposition the user interface element 230.
In a first example, a user movement corresponds to the change from
In a second example, a user movement corresponds to the change from
Note that, in some implementations, a user interface element 230 may be configured to always remain gravity or horizon aligned, such that head changes, body changes, and/or user window movement control gestures in the roll orientation would not cause the UI to move within the 3D environment.
Note that the user's movements in the real world (e.g., physical environment 100) correspond to movements within a 3D space, e.g., an XR environment that is based on the real-world and that includes virtual content such as user interfaces positioned relative to real-world objects including the user. Thus, the user is moving their hand in the physical environment 100, e.g., through empty space, but that hand (i.e., a depiction or representation of the hand) may have a 3D position that is associated with a 3D position within an XR environment that is based on that physical environment. In this way, the user may virtually interact in 3D with the virtual content.
In this example of
In the example of
The reference point is used to interpret user interaction, i.e., specify current user interface element location. In
In this example of
In the example of
In this example of
Thus, using the rays 515a and 515b to determine a repositioning of the user interface element would (and does) result in an erroneous positioning, as illustrated. In
Some implementations disclosed herein enable accurate responses to user activity such as those illustrated in
Similar to the example of
In this example, the device 605 detects certain head and hand movement characteristics, and based on those characteristics satisfying one or more criteria, determines to fix the position of the user interface element 630 during that time. Thus, during the user motion from the time of
A pinch gesture is detected and used to control movement, e.g., while the user is pinching, any movement of the user's hand causes a corresponding movement of the window movement icon 750 and user interface element 730. In this example, while pinching, the user performs head motion, e.g., turning their head side to side in direction 740, while not moving their torso or hand, i.e., the hand is steady and does not move in direction 742 and the torso 780 is steady and does not move in direction 744. Without mitigation, such head-only motion may result in unintended movement of the user interface element 730 from side to side, e.g., based on the head movement causing a reference position change, as described above with respect to
Other factors may be considered in determining unintended movement mitigations so that such mitigations do not interfere with the desired responses of user interface elements such as user interface element 730. In one situation, a user's whole body rotates around a UI element (e.g., window) while the user is pinching at a grab bar. In this case, the desired behavior may be for the UI element (e.g., window) to rotate in place always facing the user. In another situation, a user's whole body rotates around themself while the user is pinching at a grab bar. In this case, the desired behavior may be for the window to rotate around the user, facing them all the time. In another situation, a user walks/turns around corners or otherwise makes significant directional changes while walking and while holding a grab bar. In this case, the desired behavior may be for the window to travel with the user, facing them. In some implementations, the mitigations applied to account for unintended movements should not degrade or otherwise interfere with such responses.
Some implementations mitigate or otherwise avoid unintended user interface movements by identifying circumstances in which it is appropriate to fix the position (and/or orientation) of a user interface element or dampen its repositioning response to certain user activity. Since sensor data may be limited or unavailable from which torso 780 position and/or movement can be detected, some implementations determine when such circumstances exist based on hand movement characteristics and/or head movement characteristics, e.g., by assessing a difference between relative head and hand movement. In one example, this may involve determining a median head angular speed relative to pinch centroid as described with respect to
In some implementations, user interface element (e.g., window) placement is performed as a function of the distance between a user's shoulder and a pinch centroid. The shoulder position may be estimated based on head pose, which can swing with head rotation, and head rotation may also cause detection of false hand motion/swings. Such false hand motion/swings may be amplified because relatively smaller hand motions may be interpreted using angular criteria such that such motions cause relatively larger movements of user interface elements that are positionally further away, e.g., from an angular reference point. Thus, small detected false hand motions may result in relatively large movements of the user interface element. Such responses can provide false window “swimming” in which a user interface element appears to move around as the head moves, even when a user is holding their hand steady or otherwise intending or expecting the element to remain stable at its location in the 3D environment as the head moves. Some implementations enable user interface repositioning techniques that mitigate unintended or unexpected window swim or movement (e.g., based on head motion) while preserving intended placement behaviors (e.g., behaviors illustrated in
Some implementations fix a user interface element (e.g., window) location when a head rotation is detected. Doing so may provide stability during head swings but may produce stutter behavior at transitions between fixed and non-fixed periods.
Some implementations provide stabilization properties (e.g., fixing, dampening, or enabling user interface element movement) based on distinguishing circumstances in which a user's head movement is associated with a user intention to have a user interface element move (e.g., as the user walks around or rotates their entire body) from circumstances in which the user's head movement is not associated with an intention to have a user interface element move (e.g., when the user is shaking or twisting their head in a way that is unrelated to gesturing or other activity associated with an intention to have a user interface element move). Some implementations allow normal user interface element movement in circumstances in which a user's head movement is associated with a user intention to have a user interface element move (e.g., as the user walks around or rotates their entire body). Some implementations restrict or dampen user interface element movement in circumstances in which the user's head movement is not associated with an intention to have a user interface element move (e.g., when the user is shaking or twisting their head in a way that is unrelated to gesturing or other activity associated with an intention to have a user interface element move).
Head and hand movement information can be used in various ways to identify and distinguish such circumstances. Some implementations determine if the user's hand movement differs from the user's head movement (e.g., indicative of head movement unrelated to hand movement and not associated with intentional user interface element movement), so that the system may provide fixed user interface placement or dampened repositioning of the user interface element. The user interface element will not be moved (at all or as much) based on head-based movements that do not correspond to a user intention to have the user interface element move. Some implementations determine if both the head and hand are moving together (e.g., indicative of head movement related to hand/body movement that is associated with intentional user interface element movement), so that the system may provide regular (unfixed/undampened) repositioning of the user interface element.
The difference between median head angular speed and median PC angular speed is generally indicative of whether a user's head is rotating relative to (or with) the user's torso. In this example, the median head angular speed and the negative of the median PC angular speed are added at block 822 to identify a difference or otherwise produce a comparison output. The output and a 0.0 value are assessed at take max block 824 to identify a median head angular speed relative to PC value 826. In other words, the “take max” block 824 is taking the max of the difference (from block 822) or 0.0, meaning that if the difference is negative, then 0 is chosen. This value (and/or other values in block 850) may be compared to a threshold to determine whether a user's head is rotating relative to (or with) the user's hand/torso and/or, accordingly, whether to fix or dampen the repositioning of a user interface element.
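A minimal sketch of the add-then-take-max behavior described for blocks 822 and 824 follows; it assumes the median speeds have already been computed upstream (e.g., by the angular and median filters described above).

```python
def head_speed_relative_to_pc(median_head_angular_speed: float,
                              median_pc_angular_speed: float) -> float:
    """Difference between median head and pinch-centroid (PC) angular speeds,
    clamped at zero: if the PC is rotating as fast as or faster than the
    head, the relative value is 0.0."""
    return max(median_head_angular_speed - median_pc_angular_speed, 0.0)
```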
Restriction or dampening of a user interface element may be performed when the median head angular speed relative to PC 826 is greater than the threshold, e.g., when the circumstances indicate that the head is moving significantly without corresponding to pinch centroid/torso movement. Determining circumstances in which to fix or dampen the repositioning of a user interface element may be performed using these or other criteria that are indicative of user activity corresponding to an intentional action to move a user interface element (e.g., where head angular speed and PC angular speed are similar, i.e., within a similarity threshold) versus an action not intended to move a user interface element (e.g., where head angular speed and PC angular speed are dissimilar, i.e., beyond a similarity threshold).
Pinch centroid (PC) pose information (e.g., from device outward facing image sensor-based hand tracking) may additionally (or alternatively) be used to determine when to fix or dampen user interface element position, e.g., pinch centroid translational speed may be used in addition to or as an alternative to angular speed. Pinch centroid (PC) pose information may be passed through a translational filter 806 to produce PC speed, which is passed through a median filter 816 (e.g., identifying a median within a specified time window) to produce a median PC speed. For example, this may involve using the last x number (e.g., 2, 3, 5, 10, etc.) of pinch centroid speed values to determine a median speed, e.g., 4.8, 4.9, 5.0, 5.1, and 5.2 values used to determine a median PC speed value of 5.0. This value may additionally or alternatively be used to determine whether to fix or dampen the repositioning of a user interface element. For example, in a circumstance in which relative head/PC angular speeds may suggest unintentional movement (i.e., no intention to move a user interface element), the median speed of the PC may indicate otherwise by indicating that the user is in fact performing an intentional interaction. Thus, the user interface element's position may not be fixed or dampened based on detecting user activity characteristics (i.e., median speed of the PC) that are indicative of intentional action to move a user interface element (e.g., a median PC speed above a threshold, indicating the hand is moving fast). In various implementations, the system balances indications of intentions based on different head/hand motion characteristics to predict a user's intentions with respect to moving user interface elements and fixes or dampens user interface element positioning accordingly.
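A sliding-window median filter of the kind described (e.g., median filter 816) could be sketched as follows; the window size and class name are illustrative assumptions, and the example reproduces the 4.8 through 5.2 sample values above.

```python
from collections import deque
from statistics import median

class MedianSpeedFilter:
    """Sliding-window median filter, e.g., over pinch-centroid (PC) speeds."""

    def __init__(self, window_size: int = 5):
        self.samples = deque(maxlen=window_size)

    def update(self, speed: float) -> float:
        """Add a new speed sample and return the median of the window."""
        self.samples.append(speed)
        return median(self.samples)

# Example matching the values above: the median of 4.8 ... 5.2 is 5.0.
f = MedianSpeedFilter()
for s in (4.8, 4.9, 5.0, 5.1, 5.2):
    median_pc_speed = f.update(s)
```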
As another example, total PC translation may additionally or alternatively be used to determine when to fix or dampen user interface element position, e.g., total PC translation may be used in addition to or as an alternative to relative head/PC angular speed and/or PC translational speed. For example, in a circumstance in which relative head/PC angular speeds may suggest unintentional movement (i.e., no intention to move a user interface element), the total PC translation may indicate otherwise by suggesting that the user is involved in a larger scale hand or body movement that is likely associated with an intention to interact. Thus, the user interface element's position may not be fixed or dampened based on detecting user activity characteristics (i.e., total PC translation) that are indicative of intentional action to move a user interface element (e.g., a total PC translation above a threshold, indicating the hand/torso has moved significantly over time).
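One way to accumulate total PC translation over a gesture is sketched below; it assumes per-frame pinch centroid positions are available as a sequence, which is an illustrative simplification.

```python
import numpy as np

def total_pc_translation(pc_positions: list) -> float:
    """Total pinch-centroid translation over a gesture, summed frame to
    frame from a sequence of 3D positions (numpy arrays)."""
    return float(sum(np.linalg.norm(b - a)
                     for a, b in zip(pc_positions, pc_positions[1:])))
```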
Generally, detection of circumstances in which to fix or dampen the repositioning of a user interface element may be performed by thresholding any of the entities in block 850 or a function of these entities and/or using additional or alternative criteria indicative of when user activity corresponds to an intentional action to move a user interface element versus an action not intended to move a user interface element. In other examples, other features, such as total translation, total rotation, acceleration, etc., are used for detection.
In some implementations, different placement states are determined based on detecting various circumstances, e.g., various head and/or hand movement characteristics. In one example, a fixed or dampened placement state may be determined based on a combination of multiple factors, for example, where (a) a median head angular speed relative to PC is greater than a threshold A and (b) a median PC speed is less than a threshold B. Median head angular speed relative to PC being greater than threshold A may be indicative of the user's head moving independently of the user's hand (which may, for example, occur when the user is moving their head independently of their hand and/or entire torso), while median PC speed being less than threshold B may be indicative of the user's hand being relatively stationary and/or being less likely to be involved in a gesture. If both circumstances are present, the system may have significant confidence that the head motion does not correspond to intentional UI movement activity and thus determine to fix or dampen movement that would otherwise be triggered.
In contrast, a regular (i.e., non-fixed or non-dampened) placement state may be determined when (a) a median head angular speed relative to PC is less than a threshold C (e.g., potentially indicative of head/hand/torso moving together), or (b) a median PC speed is greater than a threshold D (e.g., hand is moving fast enough to infer that gesturing or intentional interaction is likely occurring), or (c) a total PC translation is greater than a threshold E (e.g., user's total movement of the hand over time is indicative of a large or significant hand or body movement such that inferring intentional gesturing or interaction is appropriate).
In these examples, A, B, C, D, and E are all greater than 0. While threshold A could equal threshold C and threshold B could equal threshold D in the above examples, the system may promote hysteresis by requiring A to be greater than C and D to be greater than B. Other criteria and/or thresholds may be used in other implementations.
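The state logic of the preceding paragraphs could be sketched as follows; the numeric defaults for thresholds A through E are illustrative assumptions only, chosen to satisfy A > C and D > B for hysteresis as noted above.

```python
def placement_state(current_state: str,
                    head_rel_pc_angular_speed: float,
                    median_pc_speed: float,
                    total_pc_translation: float,
                    A: float = 0.5, B: float = 0.05,
                    C: float = 0.3, D: float = 0.10,
                    E: float = 0.20) -> str:
    """Return 'fixed' or 'regular' placement state.

    Enter the fixed/dampened state when head rotation clearly exceeds hand
    rotation (> A) while the hand is nearly still (< B); return to regular
    placement when the head-vs-hand difference drops (< C), the hand speeds
    up (> D), or the hand has translated far in total (> E). Requiring
    A > C and D > B provides hysteresis. All numeric defaults are
    illustrative assumptions.
    """
    if current_state == "regular":
        if head_rel_pc_angular_speed > A and median_pc_speed < B:
            return "fixed"
        return "regular"
    # Currently fixed/dampened: exit quickly on any sign of intentional motion.
    if (head_rel_pc_angular_speed < C
            or median_pc_speed > D
            or total_pc_translation > E):
        return "regular"
    return "fixed"
```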
Various criteria may be used to provide a user experience in which unintended head motion is identified and a user interface element's repositioning fixed or dampened during such motion. However, the criteria may be selected so that the system quickly exits the fixed or dampened state, e.g., based on a minimal amount of hand motion being detected. If both the head and hand are moving together (e.g., as the user walks around or rotates their entire body), the system may provide regular (unfixed) repositioning of the user interface element. On the other hand, when the user is shaking or twisting their head, the repositioning may be paused or dampened.
In some implementations, the criteria are used to determine whether the user is sitting or standing or whether the user is sitting, standing, or walking. User interface element fixation may be applied only in one state (e.g., only providing fixation in the sitting state) or may be applied differently in the different states (e.g., dampening repositioning relatively more in the sitting state versus the standing state). In some implementations, user activity classification is performed to determine when the user is performing head-only rotation or other head-only movements, and user interface repositioning is restricted or dampened during that state.
In some implementations a stateless algorithm may be desirable, for example, to avoid or reduce stutter at transitions. Since user interface placement may be a function of differential hand motion at each frame, window motion changes (e.g., deltas) may be dampened. A dampening factor can be a function of the entities of block 850 of
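A stateless, per-frame dampening of the kind described could be sketched as below, where the window-motion delta is scaled by a smooth factor that decays as head rotation increasingly exceeds hand rotation; the exponential mapping and rolloff constant are illustrative assumptions rather than a prescribed dampening function.

```python
import numpy as np

def dampened_delta(raw_delta: np.ndarray,
                   head_rel_pc_angular_speed: float,
                   rolloff: float = 0.5) -> np.ndarray:
    """Scale a per-frame window-motion delta by a smooth dampening factor.

    The factor decays toward zero as head rotation increasingly exceeds
    hand rotation, avoiding hard fixed/unfixed transitions (stutter);
    the exponential rolloff is an illustrative choice.
    """
    factor = float(np.exp(-head_rel_pc_angular_speed / rolloff))
    return factor * raw_delta
```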
Dampening may be applied in other contexts as well, e.g., in addition to or as an alternative to UI element placement. For example, the user might be holding still on a slider (such as a movie scroll bar) while shaking/rotating/moving their head. In this case, the system may determine that the slider should not move from the current location and dampen movement accordingly.
At block 1002, the method 1000 involves tracking a hand movement characteristic corresponding to a hand. The hand may be tracked based on first sensor data obtained via the one or more sensors. Tracking the hand characteristic may involve tracking movement, angular speed, or other attribute of a pinch centroid, i.e., a position associated with a user gesture involving pinching together two or more fingers. In one example, a position of a pinch (e.g., pinch centroid) is determined based on images captured by the one or more sensors, e.g., via outward facing sensors of a head mounted device (HMD).
At block 1004, the method 1000 may further involve determining a head movement characteristic corresponding to a head. The head may be tracked based on second sensor data obtained via the one or more sensors. A reference position (e.g., shoulder joint, chest point, elbow joint, wrist joint, etc.) may be determined based on a head position (e.g., a tracked head pose). Determining the head movement characteristic may involve data from the one or more sensors. For example, this may involve using motion sensors and/or other sensors on an HMD worn on a head of a user to track head pose (e.g., 6DOF position and orientation) and using that head pose to infer probable shoulder positions, e.g., based on an assumed fixed relationship between head pose and shoulder position. In some implementations, the head movement characteristic may include a measure of motion of the head such as angular speed, etc.
In some implementations, the reference position is separate from the head. In some implementations, the reference position is a shoulder joint, a position on or in the user's torso, an elbow joint, a position on or in the user's arm, etc. In some implementations, a reference position such as a shoulder joint position is inferred based on a position and orientation of the head of the user or a device worn on the head of the user. In some implementations, the reference position corresponds to a position on or within a torso of the user, wherein sensor data directly observing the torso is unavailable for at least a period of time.
At block 1006, the method 1000 may further involve determining a stabilization property based on the hand movement characteristic and the head movement characteristic. In one example, the stabilization property is determined based on detecting head rotation that is mostly independent of and/or significantly different than hand motion. The stabilization property may fix the position of a user interface element that is being controlled by the hand motion (e.g., allowing no movement of the user interface element) when a threshold head-to-hand motion differential is exceeded. The stabilization property may dampen movement of the user interface element based on the relative amount by which head motion exceeds hand motion.
In some implementations, the stabilization property is determined based on elements described with respect to
At block 1008, the method 1000 may further involve repositioning a user interface element displayed by the device based on a three-dimensional (3D) position of the hand (e.g., pinch centroid location), a 3D position of the reference position (e.g., shoulder location), and the stabilization property. This may involve fixing or dampening movement of the user interface element in certain circumstances. Such fixing or dampening may prevent unintended movement of the user interface element, for example, in circumstances in which a user shakes their head side to side. Such side-to-side head motion may otherwise (without the fixing or dampening) cause movement of the UI element by causing movement of a reference position associated with the head's pose that does not correspond to the position of the portion of the user's body associated with the reference position, e.g., an inferred shoulder position may differ from the actual shoulder position as the user shakes their head side-to-side. Implementations disclosed herein stabilize user interface elements in these and other circumstances in which certain user activity is not intended to result in user interface element motion.
In some implementations, the 3D position of the hand is a pinch centroid location and the 3D position of the reference position is a shoulder joint location.
In some implementations, the repositioning of the user interface element determines to change a 3D position of the user interface element based on a ray direction from a shoulder joint location through the pinch centroid location.
In some implementations, the repositioning of the user interface element determines to change a 3D position of the UI element based on a distance between the shoulder joint location and the pinch centroid location.
In some implementations, the one or more communication buses 1804 include circuitry that interconnects and controls communications between system components. In some implementations, the one or more I/O devices and sensors 1806 include at least one of an inertial measurement unit (IMU), an accelerometer, a magnetometer, a gyroscope, a thermometer, one or more physiological sensors (e.g., blood pressure monitor, heart rate monitor, blood oxygen sensor, blood glucose sensor, etc.), one or more microphones, one or more speakers, a haptics engine, one or more depth sensors (e.g., a structured light, a time-of-flight, or the like), and/or the like.
In some implementations, the one or more output device(s) 1812 include one or more displays configured to present a view of a 3D environment to the user. In some implementations, the one or more displays 1812 correspond to holographic, digital light processing (DLP), liquid-crystal display (LCD), liquid-crystal on silicon (LCoS), organic light-emitting field-effect transistor (OLET), organic light-emitting diode (OLED), surface-conduction electron-emitter display (SED), field-emission display (FED), quantum-dot light-emitting diode (QD-LED), micro-electromechanical system (MEMS), and/or the like display types. In some implementations, the one or more displays correspond to diffractive, reflective, polarized, holographic, etc. waveguide displays. In one example, the device 1800 includes a single display. In another example, the device 1800 includes a display for each eye of the user.
In some implementations, the one or more output device(s) 1812 include one or more audio producing devices. In some implementations, the one or more output device(s) 1812 include one or more speakers, surround sound speakers, speaker-arrays, or headphones that are used to produce spatialized sound, e.g., 3D audio effects. Such devices may virtually place sound sources in a 3D environment, including behind, above, or below one or more listeners. Generating spatialized sound may involve transforming sound waves (e.g., using head-related transfer function (HRTF), reverberation, or cancellation techniques) to mimic natural soundwaves (including reflections from walls and floors), which emanate from one or more points in a 3D environment. Spatialized sound may trick the listener's brain into interpreting sounds as if the sounds occurred at the point(s) in the 3D environment (e.g., from one or more particular sound sources) even though the actual sounds may be produced by speakers in other locations. The one or more output device(s) 1812 may additionally or alternatively be configured to generate haptics.
In some implementations, the one or more image sensor systems 1814 are configured to obtain image data that corresponds to at least a portion of a physical environment. For example, the one or more image sensor systems 1814 may include one or more RGB cameras (e.g., with a complementary metal-oxide-semiconductor (CMOS) image sensor or a charge-coupled device (CCD) image sensor), monochrome cameras, IR cameras, depth cameras, event-based cameras, and/or the like. In various implementations, the one or more image sensor systems 1814 further include illumination sources that emit light, such as a flash. In various implementations, the one or more image sensor systems 1814 further include an on-camera image signal processor (ISP) configured to execute a plurality of processing operations on the image data.
The memory 1820 includes high-speed random-access memory, such as DRAM, SRAM, DDR RAM, or other random-access solid-state memory devices. In some implementations, the memory 1820 includes non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid-state storage devices. The memory 1820 optionally includes one or more storage devices remotely located from the one or more processing units 1802. The memory 1820 comprises a non-transitory computer readable storage medium.
In some implementations, the memory 1820 or the non-transitory computer readable storage medium of the memory 1820 stores an optional operating system 1830 and one or more instruction set(s) 1840. The operating system 1830 includes procedures for handling various basic system services and for performing hardware dependent tasks. In some implementations, the instruction set(s) 1840 include executable software defined by binary information stored in the form of electrical charge. In some implementations, the instruction set(s) 1840 are software that is executable by the one or more processing units 1802 to carry out one or more of the techniques described herein.
The instruction set(s) 1840 include user interaction instruction set(s) 1842 configured to, upon execution, identify and/or interpret user gestures and other user activities, including by restricting or dampening user interface element repositioning, as described herein. The instruction set(s) 1840 may be embodied as a single software executable or multiple software executables.
Although the instruction set(s) 1840 are shown as residing on a single device, it should be understood that in other implementations, any combination of the elements may be located in separate computing devices. Moreover, the figure is intended more as functional description of the various features which are present in a particular implementation as opposed to a structural schematic of the implementations described herein. As recognized by those of ordinary skill in the art, items shown separately could be combined and some items could be separated. The actual number of instructions sets and how features are allocated among them may vary from one implementation to another and may depend in part on the particular combination of hardware, software, and/or firmware chosen for a particular implementation.
It will be appreciated that the implementations described above are cited by way of example, and that the present invention is not limited to what has been particularly shown and described hereinabove. Rather, the scope includes both combinations and subcombinations of the various features described hereinabove, as well as variations and modifications thereof which would occur to persons skilled in the art upon reading the foregoing description and which are not disclosed in the prior art.
As described above, one aspect of the present technology is the gathering and use of sensor data that may include user data to improve a user's experience of an electronic device. The present disclosure contemplates that in some instances, this gathered data may include personal information data that uniquely identifies a specific person or can be used to identify interests, traits, or tendencies of a specific person. Such personal information data can include movement data, physiological data, demographic data, location-based data, telephone numbers, email addresses, home addresses, device characteristics of personal devices, or any other personal information.
The present disclosure recognizes that the use of such personal information data, in the present technology, can be used to the benefit of users. For example, the personal information data can be used to improve the content viewing experience. Accordingly, use of such personal information data may enable calculated control of the electronic device. Further, other uses for personal information data that benefit the user are also contemplated by the present disclosure.
The present disclosure further contemplates that the entities responsible for the collection, analysis, disclosure, transfer, storage, or other use of such personal information and/or physiological data will comply with well-established privacy policies and/or privacy practices. In particular, such entities should implement and consistently use privacy policies and practices that are generally recognized as meeting or exceeding industry or governmental requirements for maintaining personal information data private and secure. For example, personal information from users should be collected for legitimate and reasonable uses of the entity and not shared or sold outside of those legitimate uses. Further, such collection should occur only after receiving the informed consent of the users. Additionally, such entities would take any needed steps for safeguarding and securing access to such personal information data and ensuring that others with access to the personal information data adhere to their privacy policies and procedures. Further, such entities can subject themselves to evaluation by third parties to certify their adherence to widely accepted privacy policies and practices.
Despite the foregoing, the present disclosure also contemplates implementations in which users selectively block the use of, or access to, personal information data. That is, the present disclosure contemplates that hardware or software elements can be provided to prevent or block access to such personal information data. For example, in the case of user-tailored content delivery services, the present technology can be configured to allow users to select to “opt in” or “opt out” of participation in the collection of personal information data during registration for services. In another example, users can select not to provide personal information data for targeted content delivery services. In yet another example, users can select to not provide personal information, but permit the transfer of anonymous information for the purpose of improving the functioning of the device.
Therefore, although the present disclosure broadly covers use of personal information data to implement one or more various disclosed embodiments, the present disclosure also contemplates that the various embodiments can also be implemented without the need for accessing such personal information data. That is, the various embodiments of the present technology are not rendered inoperable due to the lack of all or a portion of such personal information data. For example, content can be selected and delivered to users by inferring preferences or settings based on non-personal information data or a bare minimum amount of personal information, such as the content being requested by the device associated with a user, other non-personal information available to the content delivery services, or publicly available information.
In some embodiments, data is stored using a public/private key system that only allows the owner of the data to decrypt the stored data. In some other implementations, the data may be stored anonymously (e.g., without identifying and/or personal information about the user, such as a legal name, username, time and location data, or the like). In this way, other users, hackers, or third parties cannot determine the identity of the user associated with the stored data. In some implementations, a user may access their stored data from a user device that is different than the one used to upload the stored data. In these instances, the user may be required to provide login credentials to access their stored data.
Numerous specific details are set forth herein to provide a thorough understanding of the claimed subject matter. However, those skilled in the art will understand that the claimed subject matter may be practiced without these specific details. In other instances, methods, apparatuses, or systems that would be known by one of ordinary skill have not been described in detail so as not to obscure claimed subject matter.
Unless specifically stated otherwise, it is appreciated that throughout this specification discussions utilizing the terms such as “processing,” “computing,” “calculating,” “determining,” and “identifying” or the like refer to actions or processes of a computing device, such as one or more computers or a similar electronic computing device or devices, that manipulate or transform data represented as physical electronic or magnetic quantities within memories, registers, or other information storage devices, transmission devices, or display devices of the computing platform.
The system or systems discussed herein are not limited to any particular hardware architecture or configuration. A computing device can include any suitable arrangement of components that provides a result conditioned on one or more inputs. Suitable computing devices include multipurpose microprocessor-based computer systems accessing stored software that programs or configures the computing system from a general-purpose computing apparatus to a specialized computing apparatus implementing one or more implementations of the present subject matter. Any suitable programming, scripting, or other type of language or combinations of languages may be used to implement the teachings contained herein in software to be used in programming or configuring a computing device.
Implementations of the methods disclosed herein may be performed in the operation of such computing devices. The order of the blocks presented in the examples above can be varied, for example, blocks can be re-ordered, combined, and/or broken into sub-blocks. Certain blocks or processes can be performed in parallel.
The use of “adapted to” or “configured to” herein is meant as open and inclusive language that does not foreclose devices adapted to or configured to perform additional tasks or steps. Additionally, the use of “based on” is meant to be open and inclusive, in that a process, step, calculation, or other action “based on” one or more recited conditions or values may, in practice, be based on additional conditions or values beyond those recited. Headings, lists, and numbering included herein are for ease of explanation only and are not meant to be limiting.
It will also be understood that, although the terms “first,” “second,” etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first node could be termed a second node, and, similarly, a second node could be termed a first node, without changing the meaning of the description, so long as all occurrences of the “first node” are renamed consistently and all occurrences of the “second node” are renamed consistently. The first node and the second node are both nodes, but they are not the same node.
The terminology used herein is for the purpose of describing particular implementations only and is not intended to be limiting of the claims. As used in the description of the implementations and the appended claims, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term “and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
As used herein, the term “if” may be construed to mean “when” or “upon” or “in response to determining” or “in accordance with a determination” or “in response to detecting,” that a stated condition precedent is true, depending on the context. Similarly, the phrase “if it is determined [that a stated condition precedent is true]” or “if [a stated condition precedent is true]” or “when [a stated condition precedent is true]” may be construed to mean “upon determining” or “in response to determining” or “in accordance with a determination” or “upon detecting” or “in response to detecting” that the stated condition precedent is true, depending on the context.
The foregoing description and summary of the invention are to be understood as being in every respect illustrative and exemplary, but not restrictive, and the scope of the invention disclosed herein is not to be determined only from the detailed description of illustrative implementations but according to the full breadth permitted by patent laws. It is to be understood that the implementations shown and described herein are only illustrative of the principles of the present invention and that various modifications may be implemented by those skilled in the art without departing from the scope and spirit of the invention.
This application claims the benefit of U.S. Provisional Application Ser. No. 63/612,554 filed Dec. 20, 2023, which is incorporated herein in its entirety.