This disclosure relates to human-computer interface technologies and computer privacy including, for example, to out-of-process hit-testing for electronic devices.
Operating system software generally provides an abstraction layer between user interface hardware and applications that run on top of the operating system. Multiple applications with corresponding user interface windows can be presented on a display managed by an operating system.
Certain features of the subject technology are set forth in the appended claims. However, for purposes of explanation, several implementations of the subject technology are set forth in the following figures.
The detailed description set forth below is intended as a description of various configurations of the subject technology and is not intended to represent the only configurations in which the subject technology can be practiced. The appended drawings are incorporated herein and constitute a part of the detailed description. The detailed description includes specific details for the purpose of providing a thorough understanding of the subject technology. However, the subject technology is not limited to the specific details set forth herein and can be practiced using one or more other implementations. In one or more implementations, structures and components are shown in block diagram form in order to avoid obscuring the concepts of the subject technology.
Aspects of this disclosure include techniques for providing increased privacy and/or efficiency in computer user input systems. While any computer input system may benefit from these techniques, systems that are capable of capturing personally identifiable user data, such as extended reality systems that track a user's hand gestures and/or eye gaze location, may particularly benefit from these techniques.
In an aspect of this disclosure, a user input and rendering system may receive and render user input, and may filter the user input available to a computer software application (or “app” herein). User input filtering preserves a user's privacy from the application at least by restricting the application's access to the user's filtered input data. In some aspects, a computer user input and rendering system may be more trusted than an application running on the same computer because, for example, the user input and rendering system is provided by a trusted operating system vendor, while the application is provided by a less trusted third-party application vendor. Moreover, a user may explicitly provide an authorization to the operating system to access user input information, and may not provide the same authorization to an application, such as a third-party application.
In an aspect, a user may receive rendered feedback of the user's preliminary interactions with a user interface element, such as a button or scroll bar, without making the preliminary interactions available to the application itself. Preliminary interactions with a UI element may include a user's intentional initial interactions with a user interface, such as exploration of the application's interface (for example, to discover that a rectangle with a square inside is actually a scroll bar), and preliminary interactions may also include unintentional and even unconscious interactions with UI elements of the application (e.g., as the user's eyes move across a user interface without the user attending to the user interface).
Aspects of this disclosure provide techniques for efficiently retaining privacy of a user's preliminary UI interactions from an application while still providing rendered feedback of the preliminary interactions to the user. Rendered feedback of a preliminary interaction with a UI element might include, for example, a rendered audio cue when a hand gesture occurs near a UI element, or a UI element may start to glow when a user's gaze location is near or hovers near the UI element. In an aspect, when it is determined that a user intends to interact with an application (or a UI element of the application), then the user input may be considered no longer to be preliminary, and some user input may be provided to the application. In an aspect, an application may make a declaration or definition of a rendered feedback effect of preliminary interactions with a UI element before preliminary interactions occur, giving the application control over the nature of the preliminary interaction feedback effect, even when the application never learns of a preliminary interaction with its UI elements. The application may provide a declaration or definition of the effect to an operating system or another software component for managing rendering of the preliminary interactions. When the rendering of the effect is managed in a separate operating system process from the application providing the declaration or definition of the effect, the effect is referred to herein as a remote effect or an “out-of-process effect”. As described herein in connection with various examples and/or use cases, a separate operating system process from an application may provide, on behalf of the application, other out-of-process services, without providing information about the out-of-process services to the application (e.g., including out-of-process hit-testing, which can include out-of-process hit-testing redirecting, as described in further detail hereinafter).
As noted above, aspects of this disclosure may be applied to extended reality systems. A physical environment refers to a physical world that people can sense and/or interact with without aid of electronic devices. The physical environment may include physical features such as a physical surface or a physical object. For example, the physical environment corresponds to a physical park that includes physical trees, physical buildings, and physical people. People can directly sense and/or interact with the physical environment such as through sight, touch, hearing, taste, and smell. In contrast, an extended reality (XR) environment refers to a wholly or partially simulated environment that people sense and/or interact with via an electronic device. For example, the XR environment may include augmented reality (AR) content, mixed reality (MR) content, virtual reality (VR) content, and/or the like. With an XR system, user input may include tracking of a person's physical motions, and, in response, the XR system may render an adjustment to one or more characteristics of one or more virtual objects simulated in the XR environment in a manner that comports with at least one law of physics. For example, if a user's hand gesture is rendered in an XR system at a location near but not touching a rendered application UI element, the UI element may be rendered with a glow or jiggle, or may otherwise indicate a preliminary interaction with that UI element, without notifying the application of the preliminary user interaction. In an alternate example, if a user's hand gesture touches or grasps the UI element, it may be determined that the user deliberately intended to interact with the UI element, and then the user input indicating a touch or grasp of the UI element may be provided to the application.
Many different types of electronic user input and rendering systems may enable a human user to sense and/or interact with various XR environments. Examples include head mountable systems, projection-based systems, heads-up displays (HUDs), vehicle windshields having integrated display capability, windows having integrated display capability, displays formed as lenses designed to be placed on a person's eyes (e.g., similar to contact lenses), headphones/earphones, speaker arrays, input systems (e.g., wearable or handheld controllers with or without haptic feedback), smartphones, tablets, and desktop/laptop computers. A head mountable system may have one or more speaker(s) and an integrated opaque display.
Alternatively, a head mountable system may be configured to accept an external opaque display (e.g., a smartphone). The head mountable system may incorporate one or more imaging sensors to capture images or video of the physical environment, and/or one or more microphones to capture audio of the physical environment. Rather than an opaque display, a head mountable system may have a transparent or translucent display. The transparent or translucent display may have a medium through which light representative of images is directed to a person's eyes. The display may utilize digital light projection, OLEDs, LEDs, uLEDs, liquid crystal on silicon, laser scanning light source, or any combination of these technologies. The medium may be an optical waveguide, a hologram medium, an optical combiner, an optical reflector, or any combination thereof. In some implementations, the transparent or translucent display may be configured to become opaque selectively. Projection-based systems may employ retinal projection technology that projects graphical images onto a person's retina. Projection systems also may be configured to project virtual objects into the physical environment, for example, as a hologram or on a physical surface.
Aspects of this disclosure provide rendered confirmation to a user of user engagement with an application user interface prior to providing user input to the application. A user interface (UI) may be displayed to appear at a location in a physical environment that is remote from a user input device and/or remote from the display device that displays the user interface (e.g., in a three-dimensional XR display environment). For example, physical movement of a mouse input device may be rendered as movement of a mouse cursor on a display at a different physical location (perhaps just a few inches) from the physical mouse. As another example, a gaze cursor may indicate the location of a user's gaze, the location of the user's gaze being remote from one or more eye-facing cameras that are used to determine the gaze location. Similarly, a hand gesture may be rendered as a hand object in a virtual space and presented to a user. Thus, a rendered effect of user input may be rendered or presented to a user at a location that is physically separated from the physical location of the sensor that obtained sensed user input. In a virtual space or other extended reality environment, the user input may also occur at a location that is remote from the display component that displays an application UI and/or a representation of the user's hand to appear to the user to be at a location remote from the display component.
Moreover, a user may move their hand and/or direct their gaze at or near a rendered application user interface without intending to interact with the application UI (e.g., due to normal motion of the user around the physical environment, such as during a conversation with another person unassociated with the electronic device), and/or may perform hand gestures for interaction with one displayed application UI that are not intended to be provided to an application associated with another displayed application UI.
Aspects of the disclosure include receiving, at a system process of an electronic device from an application running on the electronic device, a definition of an effect for a first user interface (UI) element managed by the application. While the first user interface element is displayed by the electronic device without the effect applied to the first user interface element, the system process may receive a user input. In response to a determination that the user input corresponds to the first user interface element displayed without the effect, the system process may render, without providing the user input to the application, the effect on the first UI element according to the definition.
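The flow above can be sketched in code. This is an illustrative sketch only, not an actual operating system API; the names (`EffectDefinition`, `SystemProcess`, the field names) are assumptions chosen for clarity.

```python
from dataclasses import dataclass

@dataclass
class EffectDefinition:
    element_id: str   # the UI element managed by the application
    trigger: str      # the triggering user input, e.g. "gaze_near"
    effect: str       # the effect to render, e.g. "glow"

class SystemProcess:
    """Sketch of a system process that renders effects out-of-process."""

    def __init__(self):
        self.definitions = {}        # element_id -> EffectDefinition
        self.forwarded_to_app = []   # inputs actually shared with the app
        self.rendered = []           # effects rendered without app involvement

    def register(self, definition):
        # Received from the app before any preliminary interaction occurs.
        self.definitions[definition.element_id] = definition

    def handle_input(self, element_id, input_kind):
        d = self.definitions.get(element_id)
        if d is not None and d.trigger == input_kind:
            # Preliminary interaction: render the effect according to the
            # definition, without providing the user input to the app.
            self.rendered.append((element_id, d.effect))
        # Note: nothing is appended to forwarded_to_app here.

sp = SystemProcess()
sp.register(EffectDefinition("button-1", "gaze_near", "glow"))
sp.handle_input("button-1", "gaze_near")
```

In this sketch, the effect is rendered while `forwarded_to_app` stays empty, mirroring how the application never learns of the preliminary interaction.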
The computing device 120 may be a smart phone, a tablet device, or a wearable device such as a smart watch or a head mountable portable system, that includes a display system capable of presenting a visualization of an extended reality environment to the user 110. The computing device 120 may be powered with a battery and/or any other power supply incorporated into the computing device 120 and/or coupled to the computing device 120 (e.g., by a cable). In an example, the display system of the computing device 120 provides a stereoscopic presentation of the extended reality environment, enabling a three-dimensional visual display of a rendering of a particular scene, to the user. In one or more implementations, other electronic devices may be used instead of, or in addition to, the computing device 120 to access an extended reality environment.
The computing device 120 may include one or more cameras (e.g., visible light cameras, infrared cameras, etc.). Further, the computing device 120 may include various sensors that can detect user input including, but not limited to, cameras, image sensors, touch sensors, microphones, inertial measurement units (IMUs), heart rate sensors, temperature sensors, Lidar sensors, radar sensors, sonar sensors, GPS sensors, Wi-Fi sensors, near-field communications sensors, etc. Moreover, the computing device 120 may include hardware elements that can receive user input such as hardware buttons or switches. User input detected by such sensors and/or hardware elements corresponds to various input modalities for interacting with virtual content displayed within a given extended reality environment. For example, such input modalities may include, but are not limited to, facial tracking, eye tracking (e.g., gaze direction), hand tracking, gesture tracking, biometric readings (e.g., heart rate, pulse, pupil dilation, breath, temperature, electroencephalogram, olfactory), recognizing speech or audio (e.g., particular hotwords), activating buttons or switches, etc. The computing device 120 may also detect and/or classify physical objects in the physical environment of the computing device 120.
In one or more implementations, the computing device 120 may be communicatively coupled to a base device. Such a base device may, in general, include more computing resources and/or available power in comparison with the computing device 120. In an example, the computing device 120 may operate in various modes. For instance, the computing device 120 can operate in a standalone mode independent of any base device.
The computing device 120 may also operate in a wireless tethered mode (e.g., connected via a wireless connection with a base device), working in conjunction with a given base device. The computing device 120 may also work in a connected mode where the computing device 120 is physically connected to a base device (e.g., via a cable or some other physical connector) and may utilize power resources provided by the base device (e.g., where the base device is charging the computing device 120 and/or providing power to the computing device 120 while physically connected).
When the computing device 120 operates in the wireless tethered mode or the connected mode, at least a portion of processing user inputs and/or rendering the extended reality environment may be offloaded to the base device, thereby reducing processing burdens on the computing device 120. For instance, in an implementation, the computing device 120 works in conjunction with a base device to generate an extended reality environment including physical and/or virtual objects that enables different forms of interaction (e.g., visual, auditory, and/or physical or tactile interaction) between the user and the extended reality environment in a real-time manner. In an example, the computing device 120 provides a rendering of a scene corresponding to the extended reality environment that can be perceived by the user and interacted with in a real-time manner. Additionally, as part of presenting the rendered scene, the computing device 120 may provide sound, and/or haptic or tactile feedback to the user. The content of a given rendered scene may be dependent on available processing capability, network availability and capacity, available battery power, and current system workload.
The computing device 120 may also detect events that have occurred within the scene of the extended reality environment. Examples of such events include detecting a presence of a living being such as a person or a pet, a particular person, entity, or object in the scene.
As depicted in
Operation of system 200 may include receiving a user input, and outputting a rendered effect as feedback of the user input, and this may be performed outside of the app process 202 and without knowledge of app 260. App 260 may provide a description of its UI element(s) 250 to rendering system 270, and app 260 may provide a definition of its effects 252 to be rendered in response to future user input. App 260 may also provide a definition of a control style for one or more portions of a UI of the app 260. When effects component 280 receives user input corresponding to a definition of effects received from app 260, effects component 280 may cause effects 256 to be rendered by rendering system 270 as user output. When effects component 280 receives user input at a particular location while a UI window or UI element is displayed that has a control style defined by the app 260, the effects component 280 may redirect a user input associated with another portion of the UI to the UI window or UI element having the defined control style (e.g., a foreground style).
In optional aspects of system 200, effects component 280 may learn of a location of a UI element (e.g., a location of a button or scroll bar), for example directly from the app 260 or via optional UI elements 258 message from rendering system 270. In one or more implementations, effects component 280 may then perform optional hit-testing 282 between a current location of a UI element and a current location of a user input. In one or more other implementations, hit-testing 282 may be performed by another system process that is separate from the effects component 280 and that performs hit-testing for multiple different purposes (e.g., for the effects component 280 and other components and/or processes). When a particular user input is identified as being associated with a particular UI element by hit-testing 282 (e.g., by the effects component 280 or another system process of the computing device 120) between a user input and a particular UI element of an app 260, hit-testing 282 may identify a preliminary interaction with the particular UI element of the app 260. When a preliminary interaction is identified, effects 256 may be rendered as user output without notifying app 260 of the user input or the identified interaction. Alternatively, when an alternate user input is identified as a confirmed interaction by a user with a UI element, then effects component 280 or another system process may notify app 260 of the confirmed interaction as filtered user input message 254.
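The routing decision described above, in which a hit-tested interaction either triggers out-of-process rendering (preliminary) or a filtered notification to the app (confirmed), can be sketched as follows. The function and gesture names are hypothetical illustrations, not part of any actual system.

```python
def route_interaction(hit_element, gesture):
    """Sketch of routing a hit-test result.

    hit_element: UI element id from hit-testing, or None if no element was hit.
    gesture: e.g. "hover" (preliminary) or "pinch" (assumed to confirm intent).
    """
    if hit_element is None:
        return ("ignore", None)
    if gesture == "pinch":
        # Confirmed interaction: notify the app via a filtered input message.
        return ("notify_app", hit_element)
    # Preliminary interaction: render effects; the app is not notified.
    return ("render_effect", hit_element)

assert route_interaction("scrollbar", "hover") == ("render_effect", "scrollbar")
assert route_interaction("scrollbar", "pinch") == ("notify_app", "scrollbar")
assert route_interaction(None, "hover") == ("ignore", None)
```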
The definition of effects 252 may describe a variety of effects that an app instructs the system process 204 to render on one or more UI elements. Each effect defined in the definition of effects 252 may correspond to a certain type of user input interaction with a particular UI element. In aspects, the definition of effects 252 may describe a plurality of effects to be rendered in response to a single user input interaction with a single UI element. For example, definition of effects 252 may indicate that when a user's gaze is within a certain range of a particular button UI element, the button will start to glow to a certain brightness level and/or emit a sound, and when a gaze is in a closer distance range to the button, the UI element may both glow at a brighter level and also wiggle and/or generate a buzzing sound, and when a gaze is in a third closest range (perhaps gazing directly at the button and/or gazing directly at the button for at least a predefined dwell time, such as one second, one half second, one tenth of a second, or a smaller fraction of a second), the user interaction may be identified as a confirmed interaction.
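The multi-range behavior above can be sketched as a small function. The distance thresholds, dwell threshold, and effect names below are assumptions chosen for illustration; an actual definition of effects 252 could specify different values.

```python
def effects_for_gaze(distance, dwell_s=0.0):
    """Sketch: map gaze distance (and dwell time) to tiered effects."""
    if distance < 0.0:
        raise ValueError("distance must be non-negative")
    if distance <= 0.01 and dwell_s >= 0.5:
        # Third, closest range with sufficient dwell: confirmed interaction.
        return ["confirmed_interaction"]
    if distance <= 0.05:
        # Closer range: brighter glow, wiggle, and a buzzing sound.
        return ["glow_bright", "wiggle", "buzz"]
    if distance <= 0.20:
        # Outer range: soft glow and a sound cue.
        return ["glow_soft", "tone"]
    return []

assert effects_for_gaze(0.15) == ["glow_soft", "tone"]
assert effects_for_gaze(0.03) == ["glow_bright", "wiggle", "buzz"]
assert effects_for_gaze(0.0, dwell_s=1.0) == ["confirmed_interaction"]
```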
In alternate optional aspects, app 260 may provide effects component 280 with information describing the UI element of the app 260 directly (e.g., instead of the effects component 280 receiving UI information from the rendering system 270). Additionally, instead of hit-testing based on location, hit-testing 282 may more generally determine that a certain user input corresponds to an interaction with a particular user interface element. For example, a user's verbal audio input saying “the red button” may cause the hit-testing 282 to associate that audio input with a red button UI element.
In an aspect, a definition of an effect 252 may be a declarative definition. In this aspect, app 260 may provide all information necessary for a software component outside of app process 202, such as effects component 280, to cause the app's desired effect to be rendered without the app's knowledge or further participation. A declarative definition of an effect may include an identification of a UI element, for example provided by a user interface framework or the operating system, an identification of a triggering user input, and an identification of an effect to be rendered when the triggering user input corresponds to the first UI element. In one or more implementations, the UI element may be vended from one or more files, such as a Universal Scene Description (USDZ) file. In another implementation, the UI element referenced in the app's declarative definition may be provided by the operating system. In one or more implementations, the definition of effects 252 may include one or more definitions of control styles for one or more respective UI windows and/or UI elements. For example, a declarative definition of a control style may include an identification of a UI window and/or UI element to which the control style applies, an identification of one or more triggering user inputs that may occur away from the UI window and/or UI element to which the control style applies, and redirection instructions for redirecting the user input to the UI window and/or UI element to which the control style applies when the triggering user input is received. In one or more other implementations, the control style may be a control style flag or other value or parameter that indicates that any user input with other portions of an application UI should be redirected to the UI window and/or UI element to which the control style applies whenever the UI window and/or UI element to which the control style applies is displayed.
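A declarative definition with the three parts named above (UI element, triggering input, effect), plus a foreground control style with redirection, might be represented as plain data. The field names and values here are illustrative assumptions, not an actual schema.

```python
# Hypothetical declarative effect definition: element + trigger + effect.
declaration = {
    "element": "buy-button",                          # identification of the UI element
    "trigger": {"kind": "gaze", "max_distance": 0.1}, # triggering user input
    "effect": {"kind": "glow", "brightness": 0.8},    # effect to render
}

# Hypothetical control-style declaration for a foreground window.
control_style = {
    "applies_to": "dialog-window",  # UI window the control style applies to
    "style": "foreground",          # redirect stray inputs to this window
}

def redirect_target(input_element, style):
    """Sketch: while a foreground-styled window is displayed, user input on
    other portions of the app UI is redirected to that window."""
    if style["style"] == "foreground":
        return style["applies_to"]
    return input_element

assert redirect_target("background-list", control_style) == "dialog-window"
assert declaration["effect"]["kind"] == "glow"
```

Because the declaration is pure data, a system process can interpret it without calling back into the application, which is what allows the effect to be rendered without the app's knowledge or further participation.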
The method of
The UI element referenced throughout
In an aspect, the UI elements referenced throughout
In an aspect, UI elements may be specified as a layer tree or as a render tree. For example, application 260 may provide a layer tree or render tree of UI elements to rendering system 270, and application 260 may provide a definition of effects 252 including a layer tree or render tree of UI elements to an operating system or other component managing out-of-process UI effects.
The effect definition (box 302) may be received from a particular app running in an app process 350, and may describe an effect to be rendered on a UI element belonging to that particular app. However, implementations of this disclosure are not so limited. For example, the effect definition received in box 302 may be received indirectly from other sources, the effect definition may be a predetermined effect, or the UI element may be a predetermined UI element. In one example, a button UI element may be predefined, and a “lift” effect may be predefined to occur when a gaze or hover user input is determined to correspond to the predefined button element. In the case of predefined effects and/or predefined UI elements, the received effect definition may include a reference to the predefined elements without fully defining such elements.
In an aspect, an effect definition may identify an effect to be rendered for a UI element by identifying a remote state (with predefined rendering property values) to be used when user input is determined to correspond to the UI element. In a further aspect, the effect definition may identify an animation to be used when transitioning between predefined remote states.
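A remote-state table with predefined rendering property values, plus a named animation for transitions between states, can be sketched as follows. All state names, property values, and the animation name are assumptions for illustration.

```python
# Hypothetical remote states with predefined rendering property values.
remote_states = {
    "idle":  {"highlight": None,   "scale": 1.00},
    "hover": {"highlight": "glow", "scale": 1.05},
}

# Hypothetical animation identified for the idle -> hover transition.
transitions = {("idle", "hover"): "ease_in_150ms"}

def apply_state(current, target):
    """Sketch: look up the target state's properties and the transition
    animation to use when moving from the current state."""
    anim = transitions.get((current, target), "none")
    return remote_states[target], anim

props, anim = apply_state("idle", "hover")
assert props["highlight"] == "glow"
assert anim == "ease_in_150ms"
```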
User input (such as is received in boxes 304 or 310 of
In an aspect, UI effects (such as is defined in box 302 and rendered in box 308 of
In an aspect, identifying a correspondence between a user input and a first UI element (box 306) may include identifying a plurality of user interface elements that the user input may intersect with, and then determining that the plurality of identified UI elements includes the first UI element. For example, user input may be a gaze location. The system process may perform a hit-testing process (e.g., hit-testing 282) including identifying a list of potential UI elements that the user may be attempting to interact with, such as by identifying all UI elements from all applications that are located within a threshold distance of the gaze location and/or identifying all UI elements with which the gaze direction intersects. If an effect definition includes an app UI element that is in that list, then the corresponding effect may be rendered on that app UI element. In a use case in which the gaze direction intersects with multiple UI elements each having effect definitions, the system process may render the effect on the UI element, among the multiple UI elements, that is displayed to appear closest to the user and/or on the UI element with which the gaze location most centrally intersects.
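The candidate-list step of this hit-testing process can be sketched in a few lines. The 2D coordinates and the threshold distance below are illustrative assumptions; a real system would hit-test in the display's coordinate space (possibly three-dimensional).

```python
import math

def candidates(gaze, elements, threshold=0.1):
    """Sketch: return ids of all UI elements within a threshold distance
    of the gaze location.

    elements: dict of element_id -> (x, y) display location.
    """
    return [
        eid for eid, (x, y) in elements.items()
        if math.hypot(x - gaze[0], y - gaze[1]) <= threshold
    ]

elements = {"button-a": (0.50, 0.50), "button-b": (0.90, 0.10)}
hits = candidates((0.52, 0.48), elements)
assert hits == ["button-a"]       # button-b is outside the threshold
```

An effect definition naming `button-a` would then cause its effect to be rendered, since `button-a` appears in the candidate list.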
In an aspect, a single effect may be defined for a group of UI elements. For example, when user input correspondence is determined (306) for any of the UI elements in the group, the defined effect may be rendered on all UI elements in the group. In another example, user input correspondence in box 306 may be determined by only a subset of the UI elements in the group. The definition of an effect for a group of UI elements may include an indication of which subset of UI elements in the group to hit-test against. If a user input location (such as a location of a user's gaze or hand gesture) is within a certain proximity (e.g., a predefined distance defined by the system process or in the effects definition) of any element in the indicated subset of UI elements, the effect may be rendered on all UI elements in the group (e.g., unless a foreground UI window or foreground UI element having a foreground control style has been defined and displayed).
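The group-effect behavior, with hit-testing restricted to a subset of the group, might look like the following sketch. The group structure and names are illustrative assumptions.

```python
# Hypothetical group definition: only "tab-1" is hit-tested, but the
# effect is rendered on every member of the group.
group = {
    "members": ["tab-1", "tab-2", "tab-3"],
    "hit_test_subset": ["tab-1"],
    "effect": "underline",
}

def group_effects(hit_element, group):
    """Sketch: if the hit element is in the hit-test subset, render the
    group's effect on all members; otherwise render nothing."""
    if hit_element in group["hit_test_subset"]:
        return {member: group["effect"] for member in group["members"]}
    return {}

assert set(group_effects("tab-1", group)) == {"tab-1", "tab-2", "tab-3"}
assert group_effects("tab-2", group) == {}   # not in the hit-test subset
```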
In an aspect, a single effect definition may define a group of rendered effects. When correspondence to the UI element is determined (box 306), the UI element may be rendered with multiple different effects (box 308). For example, an effect definition for a button may include a combination of a lift effect (e.g., a movement of the button in a direction opposite pressing the button), a glow effect (e.g., brighten) and an audio cue (e.g., an audio tone emanating from the button).
In an aspect, an effect definition may combine multiple aspects described above. For example, a single effect definition may include multiple UI elements, multiple effects to render, and/or multiple types of user input that may trigger the rendering of the one or more effects.
In an aspect, only a summary or a subset of user input may be provided to the app (box 314), even after identifying a user's intention to interact with the app (box 312). For example, once a user's gaze-dwell time on an app button is greater than a threshold, the intention to interact with the app may be identified. However, instead of providing the user's gaze location or gaze dwell time to the app, the app may be provided with an indication that the user intended to press the app button without providing the app any knowledge about the user's gaze.
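This filtering can be sketched as a function that withholds raw gaze data and emits only a summary event once intent is identified. The dwell threshold and event shape below are assumptions for illustration.

```python
DWELL_THRESHOLD_S = 0.5   # assumed gaze-dwell threshold for intent

def filter_for_app(element_id, gaze_samples, dwell_s):
    """Sketch: return the event the app is allowed to see, if any.

    gaze_samples (the raw gaze locations) never appear in the output;
    the app learns only that its button was pressed.
    """
    if dwell_s < DWELL_THRESHOLD_S:
        return None   # still preliminary; the app sees nothing
    return {"event": "button_pressed", "element": element_id}

event = filter_for_app("ok-button", [(0.50, 0.50), (0.51, 0.50)], dwell_s=0.8)
assert event == {"event": "button_pressed", "element": "ok-button"}
assert filter_for_app("ok-button", [(0.50, 0.50)], dwell_s=0.1) is None
```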
A list selection UI may be used to illustrate a more complex example of out-of-process effects that combines various aspects described above. A common list-selection UI task may include presenting a list of several options to a user, including an indication of a current selection. Examples of a list selection UI include a “combo box” in MacOS or a “drop-down list box” in Windows. Each of the options in the list may be rendered, for example, as a visual box containing text describing the option, and the list may be presented as a row of text boxes all without a highlight except a current selection highlighted with a first color.
In an example out-of-process implementation of a list selection UI, an application 260 may provide definitions of UI elements 250 including a row of text boxes to rendering system 270. The definition of each text box may include three predefined remote states including: 1) an unselected (idle) remote state with a highlight property set to a first color (or no highlight); 2) a current selection remote state with the highlight property set to a second color; and 3) a preliminary selection remote state with a highlight property set to a third color. The application 260 may further provide an indication that a first text box of the list is a current selection, and application 260 may provide effects component 280 with definition of effects 252 that indicate how preliminary user input should affect the remote state of the text boxes. The rendering system 270 may render the list of text boxes each with its corresponding remote state, and the rendering system 270 may provide the definitions of text boxes via UI elements 258 message to the effects component 280 along with the current locations of each text box. Effects component 280 may use these definitions to manage the remote states of the text boxes. When effects component 280 receives user input including a location (e.g., a gaze location or a location of a hand gesture), effects component 280 may perform hit-testing of the user input location against the locations of the text boxes to determine a correspondence between the user input and one of the rendered text boxes.
When the user input location corresponds to the second text box in the list, which is different from the currently selected first text box, the effects component 280 may cause rendering system 270 to render an effect 256 including a change in highlight property from the unselected remote state to a preliminary selection remote state, without notifying the app of the user input location or the change in remote state and highlight property of the second text box.
Alternatively, or in addition to the preliminary selection above, when user input indicates a user's completed selection, the app may be notified of the user's intent to change the current selection in the list. For example, such a notification may occur after rendering the preliminary selection above and in response to receiving additional user input (box 310) that identifies the user's intention to change the current selection to the second box (for example, when a gaze-dwell duration threshold is exceeded on the second box, or when a button press gesture occurs on the second box).
In an aspect, a remote effect definition may specify an animation for rendering when transitioning between remote states. For example, in the list selection UI example above, a remote state transition animation may be defined from the unselected remote state to the preliminary selection remote state to include a fading transition from the first color of highlight (no highlight for idle state) to the third color of highlight (for preliminary selection state).
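The list-selection example, with its three predefined remote states per text box and a fade animation for the idle-to-preliminary transition, can be pulled together in one sketch. State names, colors, and the animation name are illustrative assumptions.

```python
# Hypothetical remote states from the list-selection example above.
STATES = {
    "idle":        {"highlight": None},       # first color / no highlight
    "current":     {"highlight": "color2"},   # second color
    "preliminary": {"highlight": "color3"},   # third color
}

# Hypothetical transition animation: fade when moving idle -> preliminary.
TRANSITIONS = {("idle", "preliminary"): "fade"}

boxes = {"box-1": "current", "box-2": "idle", "box-3": "idle"}
notified_app = []   # the app is never notified of preliminary changes

def preliminary_select(box_id):
    """Sketch: move a box to the preliminary selection remote state and
    return the transition animation to render, without notifying the app."""
    prev = boxes[box_id]
    boxes[box_id] = "preliminary"
    return TRANSITIONS.get((prev, "preliminary"), "none")

anim = preliminary_select("box-2")
assert anim == "fade"
assert boxes == {"box-1": "current", "box-2": "preliminary", "box-3": "idle"}
assert notified_app == []   # app still believes box-1 is the only selection
```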
In an aspect, the operations of method of
In other aspects, effects for preliminary user interactions may be identified as remote-UI effects, while effects for confirmed user inputs may be identified as app-managed-UI effects (box 552). In one or more implementations, app-managed UI effects based on confirmed user inputs may be managed, by the application, based on the location of the user input (e.g., as provided in box 314). In one or more implementations described in further detail hereinafter, the system process 300 may provide a re-directed user input, received at one location on the application UI, to the application in association with a different location on the application UI (e.g., on a foreground UI window for which a foreground control style has been provided). In the list-selection UI example described above, the preliminary selection rendering effect may be identified as a remote-UI effect, while a completed selection rendering effect may be identified as an app-managed UI effect.
As shown, the user interface windows 402 of the application 260 may include one or more user interface elements 406. As shown, the user interface window 422 may also include one or more user interface elements 426. As examples, the user interface elements 406 and/or the user interface elements 426 may include virtual buttons, virtual switches, virtual lists (e.g., drop-down lists), tabs, scrollbars, application icons, and/or other interactable elements. As shown in
As described herein, a system process 300 may render one or more effects for a user interface element 406 when the user input to the computing device 120 is received at the location 408 of a UI element 406 (e.g., when the user's gaze location 114 falls within the boundary of the UI element 406 or within a range of the boundary of the UI element 406, and/or when a user's hand or finger hovers over a location within the boundary of the UI element 406 or within a range of the boundary of the UI element 406, such as for a predetermined amount of time). As described herein, these effects can be out-of-process effects, pre-defined by the underlying application for the UI element 406, that are rendered without providing user information (e.g., the location of the user input) to the application.
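A hit test of the kind described (a boundary test with an optional range around the boundary, plus a dwell-time threshold for hover or gaze) can be sketched as follows. `UIElement`, the margin semantics, and the threshold value are illustrative assumptions, not the actual system implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class UIElement:
    x: float
    y: float
    width: float
    height: float

def hit_test(element, px, py, margin=0.0):
    """True if (px, py) falls within the element's boundary, or within
    `margin` of that boundary (as for a gaze location near an edge)."""
    return (element.x - margin <= px <= element.x + element.width + margin
            and element.y - margin <= py <= element.y + element.height + margin)

def dwell_exceeded(hover_start_s, now_s, threshold_s=0.5):
    """True once a hover or gaze has persisted for the dwell threshold."""
    return (now_s - hover_start_s) >= threshold_s

# Usage: the system process, not the app, evaluates these predicates, so the
# user's gaze/hover location never needs to be disclosed to the application.
button = UIElement(x=10, y=10, width=100, height=40)
assert hit_test(button, 50, 30)            # inside the boundary
assert hit_test(button, 8, 30, margin=5)   # within range of the boundary
assert not hit_test(button, 200, 200)      # outside: no effect rendered
```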
In one or more implementations, a user input to a user interface element 406, and/or another interaction with an application UI, may cause a foreground window of the application UI to be presented. For example,
As indicated in
In the example of
For example, in a smartphone, a tablet, a laptop, or a desktop computer implementation of the computing device 120 in which the viewable display area is limited to the physical area of the display itself, the computing device may generate a non-visible display layer over the entire viewable display area between the foreground window 402F and all other display content, the non-visible display layer configured to receive any user inputs outside of the foreground window 402F. However, in systems with large displays and/or systems that display three-dimensional environments in which display content appears, to the user, to be at locations remote from the display itself, displaying such a virtual layer over the entire viewable display area can be undesirable, inefficient, or even ineffective.
For example, in the example of
For example,
Aspects of the subject technology provide for out-of-process hit-testing that can determine how to handle user inputs (e.g., including preliminary and/or confirmed user inputs as described herein) when a foreground window 402F is displayed together with one or more other UI windows and/or UI elements 406 of an underlying application, such as the application 260.
For example,
In a use case in which the user input that is received while the foreground window 402F is displayed is (e.g., as determined by the system process 300) a preliminary user input, the system process 300 may abstain from providing (e.g., may not provide) a previously defined remote effect on the UI element 406 at which the user input was received.
In a use case in which the user input that is received while the foreground window 402F is displayed is (e.g., as determined by the system process 300) a confirmed user input, the application that receives the re-directed indication of the user input in connection with the foreground window 402F may then determine whether to take any action and/or what action to take responsive to the user input. For example, the application 260 may close the foreground window 402F in response to the user input at the location 708, or the application 260 may take no action with respect to the foreground window 402F. In order to provide this foreground functionality for the foreground window 402F, the application 260 may provide a definition of a control style (e.g., a foreground control style) that defines the foreground functionality, to the system process 300 (e.g., in the remote definition of box 554).
In the example of
At block 904, the system process may receive, while the first user interface window and a second user interface window (e.g., a UI window 402 or a UI element 406) managed by the application are displayed by the electronic device, a user input at a location (e.g., a location 708) associated with the second UI window. For example, the location associated with the second UI window may be a location that has been determined, by the system process (e.g., using a hit-testing operation such as the hit-testing 282 described herein), to intersect with the second UI window and/or a UI element thereof.
In one or more implementations, the first user interface window may be displayed to appear at least partially in front of the second user interface window. In one or more implementations, the first user interface window may be spatially separated from the second user interface window. In one or more implementations, the first user interface window and the second user interface window may be displayed to appear at first and second respective locations that are remote from the electronic device, and the second respective location of the second user interface window may be closer to the electronic device than the first respective location of the first user interface window (e.g., as in the example of
At block 906, the system process may redirect, based on the definition of the control style, the user input at the location associated with the second UI window to the first UI window. For example, redirecting the user input at the location associated with the second UI window to the first UI window may include providing, by the system process to the application, an indication of the user input in association with a different location (e.g., different from the location at which the user input was received), the different location corresponding to the first UI window. As another example, redirecting the user input at the location associated with the second UI window to the first UI window may include providing, by the system process to the application, an indication of the user input and an indication to process the user input in association with the first UI window (e.g., without providing the location associated with the second UI window to the application).
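Block 906 admits at least the two redirection variants just described: reporting the input at a substitute location on the first UI window, or reporting the input with no location at all, only an instruction to process it against the first UI window. The sketch below illustrates both; the dictionary shapes and names (`anchor_location`, `event`, `target`) are hypothetical, not an actual system API.

```python
def redirect_with_substitute_location(user_input, foreground_window):
    """Variant (a): report the input to the app at a different location,
    one that lies on the foreground (first) UI window, rather than the
    location on the second UI window where it actually occurred."""
    return {
        "event": user_input["event"],
        "location": foreground_window["anchor_location"],  # substitute
        "target": foreground_window["id"],
    }

def redirect_without_location(user_input, foreground_window):
    """Variant (b): report only that an input occurred and that it should
    be processed in association with the foreground window; the true
    location on the second UI window is withheld from the application."""
    return {"event": user_input["event"], "target": foreground_window["id"]}

fg = {"id": "alert-1", "anchor_location": (0, 0)}
tap = {"event": "confirm", "location": (350, 120)}  # actually on 2nd window

a = redirect_with_substitute_location(tap, fg)
b = redirect_without_location(tap, fg)
assert a["location"] != tap["location"]  # true location replaced
assert "location" not in b               # true location withheld entirely
```

Either variant keeps the true input location out of the application's reach, which is the privacy property the out-of-process hit-testing is intended to preserve.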
In one or more implementations, the process 900 may also include (e.g., before or after receiving the user input that is redirected to the first UI window, such as before the first UI window is opened or after the first UI window is closed) receiving another user input at the location associated with the second UI window (e.g., the same location 708) while the second UI window is displayed and the first UI window is not displayed; and providing the other user input to the second UI window (e.g., by providing an indication to the application 260 that the other user input was received at a UI element at the location 708, such as without providing the location 708 to the application).
In one or more implementations, the process 900 may also include (e.g., before or after receiving the user input that is redirected to the first UI window, such as before the first UI window is opened or after the first UI window is closed) receiving another user input (e.g., a preliminary user input, such as a user gaze) at the location associated with the second UI window (e.g., the same location 708) while the second UI window is displayed and the first UI window is not displayed; and generating, by the system process responsive to the other user input at the location associated with the second UI window while the second UI window is displayed and the first UI window is not displayed, a visual or audio effect (e.g., a remote effect as described herein) for an element (e.g., the UI element 406 coinciding with the location 708) of the second UI window, the visual or audio effect based on another definition previously provided to the system process (e.g., at box 554) by the application. In one or more implementations, while the first user interface window and the second user interface window managed by the application are displayed by the electronic device, the system process may also abstain from providing the visual or audio effect for the element of the second UI window responsive to the user input received at the location associated with the second UI window.
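Taken together with the foreground behavior described above, the overall routing policy of the system process reads roughly as follows. This is a schematic sketch only; the string labels and the `route_input` signature are invented for illustration.

```python
def route_input(user_input, foreground_displayed, control_style_defined):
    """Hypothetical routing policy for an input at a location on the
    second UI window:
    - foreground shown + preliminary input: abstain from the previously
      defined remote effect
    - foreground shown + confirmed input + foreground control style
      defined: redirect the input to the first (foreground) UI window
    - foreground not shown: preliminary input -> render the pre-defined
      remote effect; confirmed input -> deliver to the second UI window
    """
    preliminary = user_input["kind"] == "preliminary"
    if foreground_displayed:
        if preliminary:
            return ("abstain", None)
        if control_style_defined:
            return ("redirect_to_foreground", user_input["event"])
        return ("deliver_to_second_window", user_input["event"])
    if preliminary:
        return ("render_remote_effect", user_input["event"])
    return ("deliver_to_second_window", user_input["event"])

assert route_input({"kind": "preliminary", "event": "gaze"},
                   True, True)[0] == "abstain"
assert route_input({"kind": "confirmed", "event": "tap"},
                   True, True)[0] == "redirect_to_foreground"
assert route_input({"kind": "preliminary", "event": "gaze"},
                   False, True)[0] == "render_remote_effect"
```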
In one or more implementations, the electronic device (e.g., the application 260) may, responsive to receiving the redirected user input by the first user interface window, close the first user interface window. In one or more implementations, the electronic device (e.g., the application 260) may, responsive to receiving the redirected user input by the first user interface window, ignore the user input (e.g., the application 260 may take no action responsive to the user input).
In one or more implementations, the system process may receive, while the first user interface window and the second user interface window managed by the application are displayed by the electronic device, another user input at another location (e.g., a location 808) unassociated with the first user interface window or the second user interface window; and abstain from providing the other user input to the application (e.g., as described herein in connection with
As described above, one aspect of the present technology is the gathering and use of data available from specific and legitimate sources for providing out-of-process hit-testing for electronic devices. The present disclosure contemplates that in some instances, this gathered data may include personal information data that uniquely identifies or can be used to identify a specific person. Such personal information data can include audio data, voice data, demographic data, location-based data, online identifiers, telephone numbers, email addresses, home addresses, encryption information, data or records relating to a user's health or level of fitness (e.g., vital signs measurements, medication information, exercise information), date of birth, or any other personal information.
The present disclosure recognizes that personal information data, in the present technology, can be used to the benefit of users. For example, the personal information data can be used for providing out-of-process hit-testing for electronic devices.
The present disclosure contemplates that those entities responsible for the collection, analysis, disclosure, transfer, storage, or other use of such personal information data will comply with well-established privacy policies and/or privacy practices. In particular, such entities would be expected to implement and consistently apply privacy practices that are generally recognized as meeting or exceeding industry or governmental requirements for maintaining the privacy of users. Such information regarding the use of personal data should be prominently and easily accessible by users, and should be updated as the collection and/or use of data changes. Personal information from users should be collected for legitimate uses only. Further, such collection/sharing should occur only after receiving the consent of the users or other legitimate basis specified in applicable law. Additionally, such entities should consider taking any needed steps for safeguarding and securing access to such personal information data and ensuring that others with access to the personal information data adhere to their privacy policies and procedures. Further, such entities can subject themselves to evaluation by third parties to certify their adherence to widely accepted privacy policies and practices. In addition, policies and practices should be adapted for the particular types of personal information data being collected and/or accessed and adapted to applicable laws and standards, including jurisdiction-specific considerations which may serve to impose a higher standard. For instance, in the US, collection of or access to certain health data may be governed by federal and/or state laws, such as the Health Insurance Portability and Accountability Act (HIPAA); whereas health data in other countries may be subject to other regulations and policies and should be handled accordingly.
Despite the foregoing, the present disclosure also contemplates embodiments in which users selectively block the use of, or access to, personal information data. That is, the present disclosure contemplates that hardware and/or software elements can be provided to prevent or block access to such personal information data. For example, in the example of providing out-of-process hit-testing for electronic devices, the present technology can be configured to allow users to select to “opt in” or “opt out” of participation in the collection and/or sharing of personal information data during registration for services or anytime thereafter. In addition to providing “opt in” and “opt out” options, the present disclosure contemplates providing notifications relating to the access or use of personal information. For instance, a user may be notified upon downloading an app that their personal information data will be accessed and then reminded again just before personal information data is accessed by the app.
Moreover, it is the intent of the present disclosure that personal information data should be managed and handled in a way to minimize risks of unintentional or unauthorized access or use. Risk can be minimized by limiting the collection of data and deleting data once it is no longer needed. In addition, and when applicable, including in certain health related applications, data de-identification can be used to protect a user's privacy. De-identification may be facilitated, when appropriate, by removing identifiers, controlling the amount or specificity of data stored (e.g., collecting location data at city level rather than at an address level or at a scale that is insufficient for facial recognition), controlling how data is stored (e.g., aggregating data across users), and/or other methods such as differential privacy.
Therefore, although the present disclosure broadly covers use of personal information data to implement one or more various disclosed embodiments, the present disclosure also contemplates that the various embodiments can also be implemented without the need for accessing such personal information data. That is, the various embodiments of the present technology are not rendered inoperable due to the lack of all or a portion of such personal information data.
The bus 1010 collectively represents all system, peripheral, and chipset buses that communicatively connect the numerous internal devices of the computing device 1000. In one or more implementations, the bus 1010 communicatively connects the one or more processing unit(s) 1014 with the ROM 1012, the system memory 1004, and the permanent storage device 1002. From these various memory units, the one or more processing unit(s) 1014 retrieves instructions to execute and data to process in order to execute the processes of the subject disclosure. The one or more processing unit(s) 1014 can be a single processor or a multi-core processor in different implementations.
The ROM 1012 stores static data and instructions that are needed by the one or more processing unit(s) 1014 and other modules of the computing device 1000. The permanent storage device 1002, on the other hand, may be a read-and-write memory device. The permanent storage device 1002 may be a non-volatile memory unit that stores instructions and data even when the computing device 1000 is off. In one or more implementations, a mass-storage device (such as a magnetic or optical disk and its corresponding disk drive) may be used as the permanent storage device 1002.
In one or more implementations, a removable storage device (such as a floppy disk, flash drive, and its corresponding disk drive) may be used as the permanent storage device 1002. Like the permanent storage device 1002, the system memory 1004 may be a read-and-write memory device. However, unlike the permanent storage device 1002, the system memory 1004 may be a volatile read-and-write memory, such as random-access memory. The system memory 1004 may store any of the instructions and data that one or more processing unit(s) 1014 may need at runtime. In one or more implementations, the processes of the subject disclosure are stored in the system memory 1004, the permanent storage device 1002, and/or the ROM 1012. From these various memory units, the one or more processing unit(s) 1014 retrieves instructions to execute and data to process in order to execute the processes of one or more implementations.
The bus 1010 also connects to the input and output device interfaces 1006 and 1008. The input device interface 1006 enables a user to communicate information and select commands to the computing device 1000. Input devices that may be used with the input device interface 1006 may include, for example, alphanumeric keyboards and pointing devices (also called “cursor control devices”). The output device interface 1008 may enable, for example, the display of images generated by computing device 1000. Output devices that may be used with the output device interface 1008 may include, for example, printers and display devices, such as a liquid crystal display (LCD), a light emitting diode (LED) display, an organic light emitting diode (OLED) display, a flexible display, a flat panel display, a solid state display, a projector, or any other device for outputting information.
One or more implementations may include devices that function as both input and output devices, such as a touchscreen. In these implementations, feedback provided to the user can be any form of sensory feedback, such as visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input.
Finally, as shown in
Implementations within the scope of the present disclosure can be partially or entirely realized using a tangible computer-readable storage medium (or multiple tangible computer-readable storage media of one or more types) encoding one or more instructions. The tangible computer-readable storage medium also can be non-transitory in nature.
The computer-readable storage medium can be any storage medium that can be read, written, or otherwise accessed by a general purpose or special purpose computing device, including any processing electronics and/or processing circuitry capable of executing instructions. For example, without limitation, the computer-readable medium can include any volatile semiconductor memory, such as RAM, DRAM, SRAM, T-RAM, Z-RAM, and TTRAM. The computer-readable medium also can include any non-volatile semiconductor memory, such as ROM, PROM, EPROM, EEPROM, NVRAM, flash, nvSRAM, FeRAM, FeTRAM, MRAM, PRAM, CBRAM, SONOS, RRAM, NRAM, racetrack memory, FJG, and Millipede memory.
Further, the computer-readable storage medium can include any non-semiconductor memory, such as optical disk storage, magnetic disk storage, magnetic tape, other magnetic storage devices, or any other medium capable of storing one or more instructions. In one or more implementations, the tangible computer-readable storage medium can be directly coupled to a computing device, while in other implementations, the tangible computer-readable storage medium can be indirectly coupled to a computing device, e.g., via one or more wired connections, one or more wireless connections, or any combination thereof.
Instructions can be directly executable or can be used to develop executable instructions. For example, instructions can be realized as executable or non-executable machine code or as instructions in a high-level language that can be compiled to produce executable or non-executable machine code. Further, instructions also can be realized as or can include data. Computer-executable instructions also can be organized in any format, including routines, subroutines, programs, data structures, objects, modules, applications, applets, functions, etc. As recognized by those of skill in the art, details including, but not limited to, the number, structure, sequence, and organization of instructions can vary significantly without varying the underlying logic, function, processing, and output.
While the above discussion primarily refers to microprocessor or multi-core processors that execute software, one or more implementations are performed by one or more integrated circuits, such as ASICs or FPGAs. In one or more implementations, such integrated circuits execute instructions that are stored on the circuit itself.
It is well understood that the use of personally identifiable information should follow privacy policies and practices that are generally recognized as meeting or exceeding industry or governmental requirements for maintaining the privacy of users. In particular, personally identifiable information data should be managed and handled so as to minimize risks of unintentional or unauthorized access or use, and the nature of authorized use should be clearly indicated to users.
Those of skill in the art would appreciate that the various illustrative blocks, modules, elements, components, methods, and algorithms described herein may be implemented as electronic hardware, computer software, or combinations of both. To illustrate this interchangeability of hardware and software, various illustrative blocks, modules, elements, components, methods, and algorithms have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application. Various components and blocks may be arranged differently (e.g., arranged in a different order, or partitioned in a different way) all without departing from the scope of the subject technology.
It is understood that any specific order or hierarchy of blocks in the processes disclosed is an illustration of example approaches. Based upon design preferences, it is understood that the specific order or hierarchy of blocks in the processes may be rearranged, or that not all illustrated blocks be performed. Any of the blocks may be performed simultaneously. In one or more implementations, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the implementations described above should not be understood as requiring such separation in all implementations, and it should be understood that the described program components (e.g., computer program products) and systems can generally be integrated together in a single software product or packaged into multiple software products.
As used in this specification and any claims of this application, the terms “base station”, “receiver”, “computer”, “server”, “processor”, and “memory” all refer to electronic or other technological devices. These terms exclude people or groups of people. For the purposes of the specification, the terms “display” or “displaying” means displaying on an electronic device.
As used herein, the phrase “at least one of” preceding a series of items, with the term “and” or “or” to separate any of the items, modifies the list as a whole, rather than each member of the list (i.e., each item). The phrase “at least one of” does not require selection of at least one of each item listed; rather, the phrase allows a meaning that includes at least one of any one of the items, and/or at least one of any combination of the items, and/or at least one of each of the items. By way of example, the phrases “at least one of A, B, and C” or “at least one of A, B, or C” each refer to only A, only B, or only C; any combination of A, B, and C; and/or at least one of each of A, B, and C.
The predicate words “configured to”, “operable to”, and “programmed to” do not imply any particular tangible or intangible modification of a subject, but, rather, are intended to be used interchangeably. In one or more implementations, a processor configured to monitor and control an operation or a component may also mean the processor being programmed to monitor and control the operation or the processor being operable to monitor and control the operation. Likewise, a processor configured to execute code can be construed as a processor programmed to execute code or operable to execute code.
Phrases such as an aspect, the aspect, another aspect, some aspects, one or more aspects, an implementation, the implementation, another implementation, some implementations, one or more implementations, an embodiment, the embodiment, another embodiment, some embodiments, one or more embodiments, a configuration, the configuration, another configuration, some configurations, one or more configurations, the subject technology, the disclosure, the present disclosure, other variations thereof and the like are for convenience and do not imply that a disclosure relating to such phrase(s) is essential to the subject technology or that such disclosure applies to all configurations of the subject technology. A disclosure relating to such phrase(s) may apply to all configurations, or one or more configurations. A disclosure relating to such phrase(s) may provide one or more examples. A phrase such as an aspect or some aspects may refer to one or more aspects and vice versa, and this applies similarly to other foregoing phrases.
The word “exemplary” is used herein to mean “serving as an example, instance, or illustration”. Any embodiment described herein as “exemplary” or as an “example” is not necessarily to be construed as preferred or advantageous over other implementations. Furthermore, to the extent that the term “include”, “have”, or the like is used in the description or the claims, such term is intended to be inclusive in a manner similar to the term “comprise” as “comprise” is interpreted when employed as a transitional word in a claim.
All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. No claim element is to be construed under the provisions of 35 U.S.C. § 112 (f) unless the element is expressly recited using the phrase “means for” or, in the case of a method claim, the element is recited using the phrase “step for”.
The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. Thus, the claims are not intended to be limited to the aspects shown herein, but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean “one and only one” unless specifically so stated, but rather “one or more”. Unless specifically stated otherwise, the term “some” refers to one or more. Pronouns in the masculine (e.g., his) include the feminine and neuter gender (e.g., her and its) and vice versa. Headings and subheadings, if any, are used for convenience only and do not limit the subject disclosure.
The present application is a continuation of U.S. patent application Ser. No. 18/217,498, entitled “OUT-OF-PROCESS HIT-TESTING FOR ELECTRONIC DEVICES,” filed Jun. 30, 2023, which claims the benefit of priority to U.S. Provisional Patent Application No. 63/358,070, entitled “OUT-OF-PROCESS EFFECTS FOR ELECTRONIC DEVICES”, filed on Jul. 1, 2022; U.S. Provisional Patent Application No. 63/402,435, entitled “OUT-OF-PROCESS EFFECTS FOR ELECTRONIC DEVICES”, filed on Aug. 30, 2022; U.S. Provisional Patent Application No. 63/449,945, entitled “OUT-OF-PROCESS AUDIO EFFECTS FOR ELECTRONIC DEVICES”, filed on Mar. 3, 2023; U.S. Provisional Patent Application No. 63/470,952, entitled “OUT-OF-PROCESS EFFECTS FOR ELECTRONIC DEVICES”, filed on Jun. 4, 2023; and U.S. Provisional Patent Application No. 63/470,955, entitled “OUT-OF-PROCESS HIT-TESTING FOR ELECTRONIC DEVICES”, filed on Jun. 4, 2023, the disclosure of each of which is hereby incorporated herein by reference in its entirety.
Provisional applications:
63/470,955, filed Jun. 2023, US
63/470,952, filed Jun. 2023, US
63/449,945, filed Mar. 2023, US
63/402,435, filed Aug. 2022, US
63/358,070, filed Jul. 2022, US

Parent application: 18/217,498, filed Jun. 2023, US
Child application: 18/774,742, US