The present description relates generally to gesture-based control of electronic devices, including, for example, probabilistic gesture control with feedback for electronic devices.
Electronic devices such as wearable electronic devices are often provided with input components such as keyboards, touchpads, touchscreens, or buttons that enable a user to interact with the electronic device. In some cases, an electronic device can be configured to accept a gesture input from a user for controlling the electronic device.
Certain features of the subject technology are set forth in the appended claims. However, for purpose of explanation, several embodiments of the subject technology are set forth in the following figures.
The detailed description set forth below is intended as a description of various configurations of the subject technology and is not intended to represent the only configurations in which the subject technology may be practiced. The appended drawings are incorporated herein and constitute a part of the detailed description. The detailed description includes specific details for the purpose of providing a thorough understanding of the subject technology. However, it will be clear and apparent to those skilled in the art that the subject technology is not limited to the specific details set forth herein and may be practiced without these specific details. In some instances, well-known structures and components are shown in block diagram form in order to avoid obscuring the concepts of the subject technology.
Aspects of the subject disclosure provide for gesture-based control of electronic devices. As but a few examples, gesture-based control can include pinching and turning a virtual dial (e.g., a virtual volume knob) or pinching and sliding a virtual slider (e.g., moving a virtual dimmer switch, or scrolling through a movie or song timeline).
In one or more implementations, the disclosed gesture-based control leverages multiple types of sensor data (e.g., electromyography (EMG) and inertial measurement unit (IMU) data), processed, in part, using multiple respective neural networks, to generate a gesture prediction. The gesture prediction may be determined along with a probability that the predicted gesture is being performed. Various additional processing features enhance the ability of the disclosed gesture detection operations to identify (e.g., using this multi-modal data) when to activate gesture control based on a gesture prediction. In one or more implementations, when a likelihood of a user's intent to perform a particular gesture reaches a threshold, gesture-based control can be activated. The disclosed gesture-based control operations can also include providing adaptive visual, auditory, and/or haptic feedback that indicates a current estimate of the likelihood of a user's intent to provide gesture-based input. The disclosed gesture-based control can also include a voice-activated or gesture-activated trigger that informs subsequent gesture detection for activation of gesture-based control.
The network environment 100 includes electronic devices 102, 103, 104, 105, 106 and 107 (hereinafter “the electronic devices 102-107”), a local area network (“LAN”) 108, a network 110, and one or more servers, such as server 114.
In one or more implementations, one, two, or more than two (e.g., all) of the electronic devices 102-107 may be associated with (e.g., registered to and/or signed into) a common account, such as an account (e.g., user account) with the server 114. As examples, the account may be an account of an individual user or a group account. As illustrated in
In one or more implementations, the electronic devices 102-107 may form part of a connected home environment 116, and the LAN 108 may communicatively (directly or indirectly) couple any two or more of the electronic devices 102-107 within the connected home environment 116. Moreover, the network 110 may communicatively (directly or indirectly) couple any two or more of the electronic devices 102-107 with the server 114, for example, in conjunction with the LAN 108. Electronic devices, such as two or more of the electronic devices 102-107, may communicate directly over a secure direct connection in some scenarios, such as when electronic device 106 is in proximity to electronic device 105. Although the electronic devices 102-107 are depicted in
In one or more implementations, the LAN 108 may include one or more different network devices/network medium and/or may utilize one or more different wireless and/or wired network technologies, such as Ethernet, optical, Wi-Fi, Bluetooth, Zigbee, Powerline over Ethernet, coaxial, Z-Wave, cellular, or generally any wireless and/or wired network technology that may communicatively couple two or more devices.
In one or more implementations, the network 110 may be an interconnected network of devices that may include, and/or may be communicatively coupled to, the Internet. For explanatory purposes, the network environment 100 is illustrated in
One or more of the electronic devices 102-107 may be, for example, a portable computing device such as a laptop computer, a smartphone, a smart speaker, a peripheral device (e.g., a digital camera, headphones), a digital media player, a tablet device, a wearable device such as a smartwatch or a band device, a connected home device, such as a wireless camera, a router and/or wireless access point, a wireless access device, a smart thermostat, smart light bulbs, home security devices (e.g., motion sensors, door/window sensors, etc.), smart outlets, smart switches, and the like, or any other appropriate device that includes and/or is communicatively coupled to, for example, one or more wired or wireless interfaces, such as WLAN radios, cellular radios, Bluetooth radios, Zigbee radios, near field communication (NFC) radios, and/or other wireless radios.
By way of example, in
In one or more implementations, one or more of the electronic devices 102-107 may include one or more machine learning models that provide an output of data corresponding to a prediction or transformation or some other type of machine learning output. As shown in
In one or more implementations, the server 114 may be configured to perform operations in association with user accounts such as: storing data (e.g., user settings/preferences, files such as documents and/or photos, etc.) with respect to user accounts, sharing and/or sending data with other users with respect to user accounts, backing up device data with respect to user accounts, and/or associating devices and/or groups of devices with user accounts.
One or more of the servers such as the server 114 may be, and/or may include all or part of the device discussed below with respect to
The device 200 may include a processor 202, a memory 204, a communication interface 206, an input device 207, an output device 210, and one or more sensors 212. The processor 202 may include suitable logic, circuitry, and/or code that enable processing data and/or controlling operations of the device 200. In this regard, the processor 202 may be enabled to provide control signals to various other components of the device 200. The processor 202 may also control transfers of data between various portions of the device 200. Additionally, the processor 202 may enable implementation of an operating system or otherwise execute code to manage operations of the device 200.
The memory 204 may include suitable logic, circuitry, and/or code that enable storage of various types of information such as received data, generated data, code, and/or configuration information. The memory 204 may include, for example, random access memory (RAM), read-only memory (ROM), flash, and/or magnetic storage.
In one or more implementations, the memory 204 may store one or more feature extraction models, one or more gesture prediction models, one or more gesture detectors, one or more (e.g., virtual) controllers (e.g., sets of gestures and corresponding actions to be performed by the device 200 or another electronic device when specific gestures are detected), voice assistant applications, and/or other information (e.g., locations, identifiers, location information, etc.) associated with one or more other devices, using data stored locally in memory 204. Moreover, the input device 207 may include suitable logic, circuitry, and/or code for capturing input, such as audio input, remote control input, touchscreen input, keyboard input, etc. The output device 210 may include suitable logic, circuitry, and/or code for generating output, such as audio output, display output, light output, and/or haptic and/or other tactile output (e.g., vibrations, taps, etc.).
The sensors 212 may include one or more ultra-wide band (UWB) sensors, one or more inertial measurement unit (IMU) sensors (e.g., one or more accelerometers, one or more gyroscopes, one or more compasses and/or magnetometers, etc.), one or more image sensors (e.g., coupled with and/or including a computer-vision engine), one or more electromyography (EMG) sensors, optical sensors, light sensors, pressure sensors, strain gauges, lidar sensors, proximity sensors, ultrasound sensors, radio-frequency (RF) sensors, platinum optical intensity sensors, and/or other sensors for sensing aspects of the environment around and/or in contact with the device 200 (e.g., including objects, devices, and/or user movements and/or gestures in the environment). The sensors 212 may also include motion sensors, such as inertial measurement unit (IMU) sensors (e.g., one or more accelerometers, one or more gyroscopes, and/or one or more magnetometers) that sense the motion of the device 200 itself.
The communication interface 206 may include suitable logic, circuitry, and/or code that enables wired or wireless communication, such as between any of the electronic devices 102-107 and/or the server 114 over the network 110 (e.g., in conjunction with the LAN 108). The communication interface 206 may include, for example, one or more of a Bluetooth communication interface, a cellular interface, an NFC interface, a Zigbee communication interface, a WLAN communication interface, a USB communication interface, or generally any communication interface.
In one or more implementations, one or more of the processor 202, the memory 204, the communication interface 206, the input device 207, and/or one or more portions thereof, may be implemented in software (e.g., subroutines and code), in hardware (e.g., an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), a Programmable Logic Device (PLD), a controller, a state machine, gated logic, discrete hardware components, or any other suitable devices), and/or in a combination of both.
In one or more implementations, memory 204 may store a machine learning system that includes one or more machine learning models that may receive, as inputs, outputs from one or more of the sensor(s) 212. The machine learning models may have been trained based on outputs from various sensors corresponding to the sensor(s) 212, in order to detect and/or predict a user gesture. When the device 200 detects a user gesture using the sensor(s) 212 and the machine learning models, the device 200 may perform a particular action (e.g., raising or lowering a volume of audio output being generated by the device 200, scrolling through video or audio content at the device 200, other actions at the device 200, and/or generating a control signal corresponding to a selected device and/or a selected gesture-control element for the selected device, and transmitting the control signal to the selected device). In one or more implementations, the machine learning models may be trained based on local sensor data from the sensor(s) 212 at the device 200, and/or based on a general population of devices and/or users. In this manner, the machine learning models can be re-used across multiple different users even without a priori knowledge of any particular characteristics of the individual users in one or more implementations. In one or more implementations, a model trained on a general population of users can later be tuned or personalized for a specific user of a device such as the device 200.
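As an illustrative, non-limiting example, the association between detected gestures and corresponding device actions (e.g., as stored for a virtual controller) may be sketched in Python as follows. The gesture names, action functions, and the dispatch_gesture helper below are hypothetical and are provided for explanation only; they do not represent any particular implementation.

```python
# Illustrative, non-limiting sketch of a gesture-to-action mapping for a
# (virtual) controller. Gesture names and action functions are hypothetical.

def raise_volume(step: int = 5) -> None:
    print(f"Raising audio output volume by {step}")

def lower_volume(step: int = 5) -> None:
    print(f"Lowering audio output volume by {step}")

def scroll_timeline(seconds: float = 10.0) -> None:
    print(f"Scrolling media timeline by {seconds} seconds")

# A controller: a set of gestures and corresponding actions to be performed
# when the corresponding gesture is detected.
VOLUME_CONTROLLER = {
    "pinch_and_rotate_clockwise": raise_volume,
    "pinch_and_rotate_counterclockwise": lower_volume,
    "pinch_and_slide": scroll_timeline,
}

def dispatch_gesture(controller: dict, detected_gesture: str) -> None:
    """Perform the action associated with a detected gesture, if any."""
    action = controller.get(detected_gesture)
    if action is not None:
        action()

# Example usage: a detected clockwise pinch-and-rotate gesture raises the volume.
dispatch_gesture(VOLUME_CONTROLLER, "pinch_and_rotate_clockwise")
```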
In one or more implementations, the device 200 may include various sensors at various locations for determining proximity to one or more devices for gesture control, for determining relative or absolute locations of the device(s) for gesture control, and/or for detecting user gestures (e.g., by providing sensor data from the sensor(s) to a machine learning system).
In the example of
As shown in
Housing 302 and band 304 may be attached together at interface 308. Interface 308 may be a purely mechanical interface or may include an electrical connector interface between circuitry within band 304 and circuitry 306 within housing 302 in various implementations. Processing circuitry such as the processor 202 of circuitry 306 may be communicatively coupled to one or more of sensors 212 that are mounted in the housing 302 and/or one or more of sensors 212 that are mounted in the band 304 (e.g., via interface 308).
In the example of
In one or more implementations, one or more of the sensors 212 may be mounted on or to the sidewall 310 of housing 302. In the example of
Although various examples, including the example of
Although not visible in
It is appreciated that, although an example implementation of the device 200 in a smartwatch is described herein in connection with various examples, these examples are merely illustrative, and the device 200 may be implemented in other form factors and/or device types, such as in a smartphone, a tablet device, a laptop computer, another wearable electronic device (e.g., a head worn device), or any other suitable electronic device that includes, for example, a machine learning system for detecting gestures.
In general, sensors for detecting gestures may be any sensors that generate input signals (e.g., to a machine learning system, such as to machine learning models such as feature extraction models) responsive to physical movements and/or positioning of a user's hand, wrist, arm, and/or any other suitable portion of a user's body. For example, to generate the input signals, the sensors may detect movement and/or positioning of external and/or internal structures of the user's hand, wrist, and/or arm during the physical movements of the user's hand, wrist, and/or arm. For example, light reflected from or generated by the skin of the user can be detected by one or more cameras or other optical or infrared sensors.
As another example, electrical signals generated by the muscles, tendons or bones of the wearer can be detected (e.g., by electromyography sensors). As another example, ultrasonic signals generated by an electronic device and reflected from the muscles, tendons or bones of the user can be detected by an ultrasonic sensor. In general, EMG sensors, ultrasonic sensors, cameras, IMU sensors (e.g., an accelerometer, a gyroscope and/or a magnetometer), and/or other sensors may generate signals that can be provided to machine-learning models of a gesture detection system to identify a position or a motion of the wearer's hand, wrist, arm, and/or other portion of the user's body, and thereby detect user gestures.
In the example of
In this example, the fraction of the perimeter of the virtual dial 400 that is surrounded by the visual indicator 402 may scale with the determined likelihood that the user is performing the gesture 408 (e.g., is intentionally performing the gesture 408). That is, in one or more implementations, the visual indicator 402 may be a dynamically updating visual indicator of a dynamically updating likelihood of an element control gesture (e.g., the gesture 408) being performed by the user. For example, the electronic device 107 may dynamically scale an overall size (e.g., a circumferential length in this example) of the visual indicator with the dynamically updating likelihood.
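As an illustrative, non-limiting example, the scaling of the visual indicator 402 with the dynamically updating likelihood may be sketched in Python as follows. The function name and the 360-degree dial geometry are hypothetical and are provided for explanation only.

```python
# Illustrative sketch: scale the circumferential extent of a visual indicator
# around a virtual dial with the dynamically updating gesture likelihood.

def indicator_arc_degrees(likelihood: float, full_circle: float = 360.0) -> float:
    """Return the arc (in degrees) of the dial perimeter to be surrounded by
    the visual indicator, scaled linearly with the likelihood in [0, 1]."""
    clamped = max(0.0, min(1.0, likelihood))
    return clamped * full_circle

# Example: a likelihood of 0.25 surrounds a quarter of the dial perimeter.
print(indicator_arc_degrees(0.25))  # 90.0
```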
In the example of
In the use case of
Although the example of
In the example of
In one or more implementations, one or more of the sensor data 702, the sensor data 704, and the sensor data 706 may have characteristics (e.g., noise characteristics) that significantly differ from the characteristics of others of the sensor data 702, the sensor data 704, and the sensor data 706. For example, EMG data (e.g., sensor data 706) is susceptible to various sources of noise arising from nearby electrical devices, or bad skin-to-electrode contact. Therefore, EMG data can be significantly noisier than accelerometer data (e.g., sensor data 702) or gyroscope data (e.g., sensor data 704). This can be problematic for training a machine learning model to detect a gesture based on these multiple different types of data with differing characteristics.
The system of
For example, the machine learning model 708 may be a feature extractor trained to extract features of sensor data of the same type as sensor data 702, the machine learning model 710 may be a feature extractor trained to extract features of sensor data of the same type as sensor data 704, and the machine learning model 712 may be a feature extractor trained to extract features of sensor data of the same type as sensor data 706. As shown, machine learning model 708 may output a feature vector 714 containing features extracted from sensor data 702, machine learning model 710 may output a feature vector 716 containing features extracted from sensor data 704, and machine learning model 712 may output a feature vector 718 containing features extracted from sensor data 706. In this example, three types of sensor data are provided to three feature extractors; however, more or fewer than three types of sensor data may be used in conjunction with more or fewer than three corresponding feature extractors in other implementations.
As shown in
In order to generate the combined input vector 722 for the gesture prediction model 724, the intermediate processing operations 720 may perform modality dropout operations, average pooling operations, modality fusion operations, and/or other intermediate processing operations. For example, the modality dropout operations may periodically and temporarily replace one, some, or all of the feature vector 714, the feature vector 716, or the feature vector 718 with replacement data (e.g., zeros) while leaving the others of the feature vector 714, the feature vector 716, or the feature vector 718 unchanged. In this way, the modality dropout operations can prevent the gesture prediction model from learning to ignore sensor data from one or more of the sensors (e.g., by learning to ignore, for example, high noise data when other sensor data is low noise data). Modality dropout operations can be performed during training of the gesture prediction model 724, and/or during prediction operations with the gesture prediction model 724. In one or more implementations, the modality dropout operations can improve the ability of the machine learning system 700 to generate reliable and accurate gesture predictions using multi-mode sensor data. In one or more implementations, the average pooling operations may include determining one or more averages (or other mathematical combinations, such as medians) for one or more portions of the feature vector 714, the feature vector 716, and/or the feature vector 718 (e.g., to downsample one or more of the feature vector 714, the feature vector 716, and/or the feature vector 718 to a common size with the others of the feature vector 714, the feature vector 716, and/or the feature vector 718, for combination by the modality fusion operations). In one or more implementations, the modality fusion operations may include combining (e.g., concatenating) the feature vectors processed by the modality dropout operations and the average pooling operations to form the combined input vector 722.
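As an illustrative, non-limiting example, the intermediate processing operations 720 (modality dropout, average pooling to a common size, and modality fusion by concatenation) may be sketched in Python as follows. The vector lengths, dropout probability, and function names are hypothetical and are provided for explanation only.

```python
import random

# Illustrative sketch of intermediate processing operations: modality dropout,
# average pooling to a common size, and modality fusion (concatenation) of
# per-modality feature vectors.

def modality_dropout(feature_vectors, drop_prob=0.2):
    """Randomly (e.g., during some training steps) replace an entire modality's
    feature vector with zeros, leaving the other modalities unchanged."""
    out = []
    for vec in feature_vectors:
        if random.random() < drop_prob:
            out.append([0.0] * len(vec))
        else:
            out.append(list(vec))
    return out

def average_pool(vec, target_len):
    """Downsample a feature vector to a common length by averaging chunks."""
    chunk = len(vec) / target_len
    pooled = []
    for i in range(target_len):
        start = int(i * chunk)
        end = max(int((i + 1) * chunk), start + 1)
        pooled.append(sum(vec[start:end]) / (end - start))
    return pooled

def fuse(feature_vectors, target_len=8):
    """Concatenate pooled per-modality feature vectors into one combined
    input vector for the gesture prediction model."""
    combined = []
    for vec in feature_vectors:
        combined.extend(average_pool(vec, target_len))
    return combined

# Example: accelerometer, gyroscope, and EMG feature vectors of different
# lengths are pooled to a common size and concatenated.
accel_features = [0.1] * 16
gyro_features = [0.2] * 16
emg_features = [0.3] * 32
combined_input = fuse(modality_dropout([accel_features, gyro_features, emg_features]))
print(len(combined_input))  # 24 (3 modalities x 8 pooled features each)
```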
The gesture prediction model 724 may be a machine learning model that has been trained to predict a gesture that is about to be performed or that is being performed by a user, based on a combined input vector 722 that is derived from multi-modal sensor data. In one or more implementations, the machine learning system 700 of the gesture control system 701 (e.g., including the machine learning model 708, the machine learning model 710, the machine learning model 712, and the gesture prediction model 724) may be trained on sensor data obtained by the device in which the machine learning system 700 is implemented and from the user of that device, and/or sensor data obtained from multiple (e.g., hundreds, thousands, millions) of devices from multiple (e.g., hundreds, thousands, millions) of anonymized users, obtained with the explicit permission of the users. In one or more implementations, the gesture prediction model 724 may output a prediction 726. In one or more implementations, the prediction 726 may include one or more predicted gestures (e.g., of one or multiple gestures that the model has been trained to detect), and may also output a probability that the predicted gesture has been detected. In one or more implementations, the gesture prediction model may output multiple predicted gestures with multiple corresponding probabilities. In one or more implementations, the machine learning system 700 can generate a new prediction 726 based on new sensor data periodically (e.g., once per second, ten times per second, hundreds of times per second, once per millisecond, or with any other suitable periodic rate).
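As an illustrative, non-limiting example, the form of the prediction 726 (one or more predicted gestures with corresponding probabilities) may be sketched in Python as follows. The gesture names and the use of a softmax over per-gesture scores are hypothetical and are provided for explanation only.

```python
import math

# Illustrative sketch: convert per-gesture scores from a gesture prediction
# model into a prediction containing gesture names and probabilities.

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

GESTURES = ["pinch_and_hold", "release", "no_gesture"]

def make_prediction(scores):
    """Return (gesture, probability) pairs sorted from most to least probable."""
    probs = softmax(scores)
    return sorted(zip(GESTURES, probs), key=lambda pair: pair[1], reverse=True)

# Example: the model's scores favor the pinch-and-hold gesture.
print(make_prediction([2.0, 0.5, 0.1]))
```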
As shown in
For example, the gesture detector 730 may periodically generate a dynamically updating likelihood of an element control gesture (e.g., a pinch-and-hold gesture), such as by generating a likelihood for each prediction 726 or for aggregated sets of predictions 726 (e.g., in implementations in which temporal smoothing is applied). For example, when an element control gesture is the highest probability gesture from the gesture prediction model 724, the gesture detector 730 may increase the likelihood of the element control gesture based on the probability of that gesture from the gesture prediction model 724, and based on the gesture detection factor. For example, the gesture detection factor may be a gesture-detection sensitivity threshold. In one or more implementations, the gesture-detection sensitivity threshold may be a user-controllable threshold that the user can change to set the sensitivity of activating gesture control to the user's desired level. In one or more implementations, the gesture detector 730 may increase the likelihood of the element control gesture based on the probability of that gesture from the gesture prediction model 724, and based on the gesture detection factor by increasing the likelihood by an amount corresponding to a higher of the probability of the element control gesture and a fraction (e.g., half) of the gesture-detection sensitivity threshold.
In a use case in which the element control gesture is not the gesture with the highest probability from the gesture prediction model 724 (e.g., the gesture prediction model 724 has output the element control gesture with a probability that is lower than the probability of another gesture predicted in the output of the gesture prediction model 724), the gesture detector 730 may decrease the likelihood of the element control gesture by an amount corresponding to a higher of the probability of whichever gesture has the highest probability from the gesture prediction model 724 and a fraction (e.g., half) of the gesture-detection sensitivity threshold. In this way, the likelihood can be dynamically updated up or down based on the output of the gesture prediction model 724 and the gesture detection factor (e.g., the gesture-detection sensitivity threshold).
As each instance of this dynamically updating likelihood is generated, the likelihood (e.g., or an aggregated likelihood based on several recent instances of the dynamically updating likelihood, in implementations in which temporal smoothing is used) may be compared to the gesture-detection sensitivity threshold. When the likelihood is greater than or equal to the gesture-detection sensitivity threshold, the gesture detector 730 may determine that the gesture has been detected and may provide an indication of the detected element control gesture to a control system 732. When the likelihood is less than the gesture-detection sensitivity threshold, the gesture detector 730 may determine that the gesture has not been detected and may not provide an indication of the detected element control gesture to a control system 732. In one or more implementations, providing the indication of the detected element control gesture may activate gesture-based control of an element at the electronic device 107 or another electronic device, such as the electronic device 106. In these examples, the gesture-detection sensitivity threshold is used in the adjusting (e.g., increasing or decreasing) of the likelihood, and as the threshold to which the likelihood is compared. In one or more other implementations, the gesture detection factor may include a likelihood adjustment factor that is used in the adjusting (e.g., increasing or decreasing) of the likelihood and that is separate from the gesture-detection sensitivity threshold to which the (e.g., adjusted) likelihood is compared for gesture control activation.
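As an illustrative, non-limiting example, the likelihood adjustment and activation logic described above may be sketched in Python as follows. The use of half of the sensitivity threshold as the fraction, the clamping of the likelihood to [0, 1], and the class and function names are hypothetical and are provided for explanation only.

```python
# Illustrative sketch of a gesture detector that dynamically updates a
# likelihood of an element control gesture based on per-prediction
# probabilities and a gesture-detection sensitivity threshold.

class GestureDetector:
    def __init__(self, element_control_gesture, sensitivity_threshold=0.6):
        self.element_control_gesture = element_control_gesture
        self.sensitivity_threshold = sensitivity_threshold
        self.likelihood = 0.0

    def update(self, prediction):
        """prediction: list of (gesture, probability) pairs, highest first."""
        top_gesture, top_prob = prediction[0]
        step = max(top_prob, 0.5 * self.sensitivity_threshold)
        if top_gesture == self.element_control_gesture:
            # Element control gesture has the highest probability: increase.
            self.likelihood = min(1.0, self.likelihood + step)
        else:
            # Another gesture (e.g., release / no gesture) is most probable: decrease.
            self.likelihood = max(0.0, self.likelihood - step)
        return self.likelihood

    def gesture_control_active(self):
        """Activate gesture-based control when the likelihood reaches the
        gesture-detection sensitivity threshold."""
        return self.likelihood >= self.sensitivity_threshold

# Example usage with a hypothetical prediction from the prediction model.
detector = GestureDetector("pinch_and_hold")
detector.update([("pinch_and_hold", 0.8), ("no_gesture", 0.2)])
print(detector.gesture_control_active())  # True once the threshold is reached
```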
Throughout the dynamic updating of the likelihood by the gesture detector 730 (e.g., based on output of the gesture prediction model 724 and based on the likelihood adjustment factor and/or the gesture-detection sensitivity threshold), the dynamically updating likelihood may be provided to a display controller. For example, the display controller (e.g., an application-level or system-level process with the capability of controlling display content for display at the electronic device 107 or the electronic device 106) may generate and/or update a visual indicator, in accordance with the likelihood (e.g., as described, for example, above in connection with
In various implementations, the control system 732 and/or the display controller may be implemented as, or as part of, a system-level process at an electronic device or as, or as part of, an application (e.g., a media player application that controls playback of audio and/or video content, or a connected home application that controls smart appliances, light sources, or the like). In various implementations, the display controller may be implemented at the electronic device with the gesture prediction model 724 and the gesture detector 730, or may be implemented at a different device (e.g., electronic device 106). In one or more implementations, the control system 732 and the display controller may be implemented separately or as part of a common system or application process.
Once the element control gesture is detected and the gesture-based control is activated, gesture control system 701 of
As discussed herein in connection with, for example,
In the example of
In the example of
As shown in
In various implementations, updating the indicator 904 of the current setting (and controlling the element accordingly) may be a relative change that corresponds to a change in the element control gesture relative to the initial orientation the element control gesture was in at the time gesture control was activated, or may be a change that depends on a difference between the orientation of the element control gesture (e.g., as indicated by the orientation indicator 900) and the current setting (e.g., as indicated by the indicator 904). For example, in the example of
However, in a use case in which the location of the orientation indicator 900 and the location of the indicator 904 are initially different (e.g., when gesture control is activated), in a relative motion update, the indicator 904 (e.g., and the underlying element control) may be changed by an amount corresponding to an amount of the changing orientation of the element control gesture relative to an initial orientation of the element control gesture when the dynamically updating likelihood reaches the threshold likelihood. In this example, the change in location of the orientation indicator 900 and the change in the location of the indicator 904 may be by a same amount, but at different absolute locations.
In one or more other implementations, if the location of the orientation indicator 900 and the location of the indicator 904 are initially different (e.g., when gesture control is activated), the indicator 904 (e.g., and the underlying element control) may be changed based on the difference between the location of the indicator 904 and the orientation of the element control gesture. For example, for a difference between the initial locations of the orientation indicator 900 (e.g., and the underlying gesture) and the indicator 904 that is less than a first difference threshold, no change in the location of the indicator 904 may be made. For a difference between the initial locations of the orientation indicator 900 and the indicator 904 that is greater than the first difference threshold and smaller than a second difference threshold, the location of the indicator 904 may be snapped to the location of the orientation indicator 900. For a difference between the initial locations of the orientation indicator 900 and the indicator 904 that is greater than the second difference threshold, the location of the indicator 904 may be smoothly moved toward the location of the orientation indicator 900 (e.g., with a speed that is scaled with the likelihood of the rotation gesture and/or the amount of the difference in the initial locations).
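As an illustrative, non-limiting example, the difference-based update of the indicator 904 described above may be sketched in Python as follows. The specific threshold values, the linear smoothing step, and the function name are hypothetical and are provided for explanation only.

```python
# Illustrative sketch: update the current-setting indicator based on the
# difference between its location and the orientation of the element control
# gesture (positions expressed, e.g., in degrees around a virtual dial).

def update_setting_indicator(indicator_pos, gesture_pos,
                             first_threshold=5.0, second_threshold=45.0,
                             smoothing_step=0.2):
    """Return the new indicator position.

    - Difference below first_threshold: leave the indicator unchanged.
    - Difference between the thresholds: snap the indicator to the gesture.
    - Difference above second_threshold: move the indicator smoothly toward
      the gesture orientation.
    """
    difference = abs(gesture_pos - indicator_pos)
    if difference < first_threshold:
        return indicator_pos
    if difference < second_threshold:
        return gesture_pos
    # Smooth motion: step a fraction of the way toward the gesture orientation.
    return indicator_pos + smoothing_step * (gesture_pos - indicator_pos)

# Examples: unchanged, snapped, and smoothly moved cases.
print(update_setting_indicator(100.0, 103.0))  # 100.0 (unchanged)
print(update_setting_indicator(100.0, 120.0))  # 120.0 (snapped)
print(update_setting_indicator(100.0, 200.0))  # 120.0 (moved partway toward 200)
```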
In various examples described herein, a visual indicator 402 is described. However, it is also appreciated that, in one or more implementations, auditory and/or haptic indicators of the likelihood of an element control gesture may also, or alternatively, be provided. As examples, a series of auditory and/or haptic taps may be provided with magnitudes, frequencies, and/or variances that scale with, or inversely to, the likelihood. In one example, haptic and/or auditory taps may be output to indicate active element control. In one or more other examples, haptic and/or auditory taps may be output with a frequency that increases at low and high likelihoods, and that decreases at likelihoods between the low and high likelihoods. In one or more other examples, haptic and/or auditory taps may be output with a frequency that decreases at low and high likelihoods, and that increases at likelihoods between the low and high likelihoods. In this way, the user can be guided by the frequency of the taps to commit to, or release, a potential element control gesture.
In one or more use cases, erroneous gesture control can occur during a time period in which the gesture control is active and the user rapidly releases the element control gesture or performs a complex movement that includes a rapid motion that is not intended to be a control gesture. This can often coincide, for example, with a drop of the user's arm, or other rapid motion of the electronic device 107 when the electronic device 107 is worn on the user's wrist. In one or more implementations, the electronic device 107 may detect (e.g., using an accelerometer and/or a gyroscope, such as with or without providing the accelerometer and/or gyroscope data to a machine learning model) motion of the electronic device 107 greater than a threshold amount of motion, and may temporarily disable or lock gesture-based control while the motion of the device is greater than the threshold amount of motion. In this example, gesture-based control can be resumed when the motion of the electronic device 107 falls below the threshold amount of motion.
In one or more use cases, even when the motion of the device is below the threshold amount of motion for disabling or locking gesture-based control, motion of the device that includes the sensors for gesture detection during gesture-based control operations can affect the process of gesture detection. For example, when the sensors are part of a smartwatch or other wearable device, sensors (e.g., EMG sensors) can be temporarily dislodged from a sensing position relative to the skin of the user/wearer. In one or more implementations, an electronic device, such as the electronic device 107, can modify the gesture prediction and/or gesture detection operations based on an amount of motion of the electronic device 107 (e.g., while the amount of motion changes and remains below the threshold amount of motion that causes locking or disabling of gesture control).
For example, the electronic device 107 may obtain motion information (e.g., IMU data, such as accelerometer data and/or gyroscope data) from a motion sensor (e.g., an IMU sensor, such as an accelerometer or a gyroscope) of the device. Based on the motion information, the electronic device may modify the gesture-detection sensitivity threshold.
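As an illustrative, non-limiting example, the motion-based locking of gesture-based control and the motion-based modification of the gesture-detection sensitivity threshold may be sketched in Python as follows. The motion metric, numeric constants, and function names are hypothetical and are provided for explanation only.

```python
# Illustrative sketch: temporarily lock gesture-based control during rapid
# device motion, and otherwise lower the gesture-detection sensitivity
# threshold as device motion increases.

LOCK_MOTION_THRESHOLD = 3.0       # hypothetical motion magnitude that locks control
BASE_SENSITIVITY_THRESHOLD = 0.6  # initial gesture-detection sensitivity threshold
MIN_SENSITIVITY_THRESHOLD = 0.3   # floor for the reduced threshold

def gesture_control_locked(motion_magnitude: float) -> bool:
    """Lock (temporarily disable) gesture-based control while device motion
    exceeds the lock threshold; control resumes when motion falls below it."""
    return motion_magnitude > LOCK_MOTION_THRESHOLD

def adjusted_sensitivity_threshold(motion_magnitude: float) -> float:
    """Decrease the sensitivity threshold as motion increases (e.g., to
    compensate for sensors being jostled relative to the skin), without
    going below a minimum value."""
    if gesture_control_locked(motion_magnitude):
        return BASE_SENSITIVITY_THRESHOLD  # control is locked; threshold unused
    reduction = 0.1 * motion_magnitude
    return max(MIN_SENSITIVITY_THRESHOLD, BASE_SENSITIVITY_THRESHOLD - reduction)

# Example: moderate motion lowers the threshold; rapid motion locks control.
print(round(adjusted_sensitivity_threshold(1.0), 2))  # 0.5
print(gesture_control_locked(5.0))                    # True
```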
As shown in a timeline 1003, when the likelihood decreases due to the device motion, the electronic device may, during a period of time 1007 within the first period of time 1000 during which the element control gesture is being performed, determine incorrectly that no element control gesture is being performed. In the timeline 1005 of
In one or more implementations, the gesture control system 701 may be modified responsive to a trigger for gesture-based control. In various implementations, the trigger may be an activation gesture, a voice input, or other trigger.
In the example of
In the example of
As shown in
For example, the voice assistant application 1204 may provide an indication to the ML system 700 and/or the gesture detector 730 of the identified controller and/or one or more corresponding element control gestures for the identified controller. In one or more implementations, the electronic device 107 may modify, based on the identified controller and/or the corresponding gesture(s), the gesture control system 701 (e.g., the gesture detector 730 or the ML system 700) that is trained to identify one or more gestures based on sensor data from one or more sensors.
The gesture detector 730 may then determine a likelihood of the element control gesture being performed by a user based on an output of the modified gesture detection system, and the control system 732 may perform one or more device control operations to control the electronic device 107 or the other device (e.g., a second device different from the electronic device 107, such as the electronic device 106 or another device) based on the likelihood of the element control gesture being performed by the user. In one or more implementations, modifying the gesture detection system may include reducing the gesture-detection sensitivity threshold for the element control gesture. In one or more implementations, modifying the gesture detection system may include modifying a weight that is applied to the likelihood by the gesture detection system (e.g., by the gesture detector 730). In one or more implementations, modifying the gesture detection system may include modifying one or more weights of one or more trained machine learning models (e.g., gesture prediction model 724) of the machine learning system.
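As an illustrative, non-limiting example, the modification of the gesture detection system responsive to an identified controller may be sketched in Python as follows. The controller names, per-gesture thresholds and weights, numeric values, and function names are hypothetical and are provided for explanation only.

```python
# Illustrative sketch: when a controller (e.g., a virtual volume dial) is
# identified from a voice input or activation gesture, modify the gesture
# detection system to make its element control gesture(s) easier to detect,
# e.g., by lowering the sensitivity threshold and/or boosting the weight
# applied to the corresponding likelihood.

DEFAULT_THRESHOLDS = {"pinch_and_hold": 0.6, "swipe": 0.6, "button_press": 0.6}
DEFAULT_WEIGHTS = {"pinch_and_hold": 1.0, "swipe": 1.0, "button_press": 1.0}

CONTROLLER_GESTURES = {
    "volume_dial": ["pinch_and_hold"],
    "remote_control": ["button_press", "swipe"],
}

def modify_for_controller(controller, thresholds, weights,
                          threshold_reduction=0.2, weight_boost=1.5):
    """Return modified copies of the per-gesture thresholds and weights for
    the element control gesture(s) associated with the identified controller."""
    new_thresholds = dict(thresholds)
    new_weights = dict(weights)
    for gesture in CONTROLLER_GESTURES.get(controller, []):
        new_thresholds[gesture] = max(0.0, new_thresholds[gesture] - threshold_reduction)
        new_weights[gesture] = new_weights[gesture] * weight_boost
    return new_thresholds, new_weights

# Example: a voice input such as "turn up the volume" identifies the volume dial.
thresholds, weights = modify_for_controller("volume_dial", DEFAULT_THRESHOLDS, DEFAULT_WEIGHTS)
print(round(thresholds["pinch_and_hold"], 2), weights["pinch_and_hold"])  # 0.4 1.5
```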
In the example of
At block 1304, responsive to providing the sensor data to a machine learning system (e.g., machine learning system 700), an output (e.g., a prediction, such as prediction 726) may be obtained from the machine learning system. The output may indicate one or more predicted gestures and one or more respective probabilities of the one or more predicted gestures. In one or more implementations, the output may include multiple predicted gestures and a corresponding probability for each of the predicted gestures. For example, the output may include a first probability that an element control gesture (e.g., a pinch-and-hold gesture or other element control gesture) is being (or is about to be) performed by a user of the device, and a second probability that a release gesture or no gesture is being (or is about to be) performed.
At block 1306, based on the output of the machine learning system and a gesture-detection factor, a likelihood of an element control gesture being performed by a user of a device including the sensor may be determined. For example, the gesture-detection factor may include a gesture-detection sensitivity threshold and/or a likelihood adjustment factor. In one or more implementations, the gesture-detection sensitivity threshold may be adjustable by the device (e.g., based on motion of the device) and/or by a user of the device (e.g., to increase or decrease the sensitivity to detection of element control gestures). As one illustrative example, the element control gesture may be a pinch-and-hold gesture.
In one or more implementations, determining the likelihood based on the output and the gesture-detection factor may include determining that a first one of the one or more respective probabilities that corresponds to the element control gesture is a highest one of the one or more respective probabilities, and increasing the likelihood by an amount corresponding to a higher of the first one of the one or more respective probabilities and a fraction (e.g., a quarter, half, etc.) of the gesture-detection sensitivity threshold. For example, the electronic device 107 may increase the likelihood of a pinch-and-hold gesture by an amount that corresponds to a higher of the probability of the pinch-and-hold gesture and half of the gesture-detection sensitivity threshold.
In one or more use cases, determining the likelihood based on the output and the gesture-detection factor may also include, after determining that the first one of the one or more respective probabilities that corresponds to the element control gesture is the highest one of the one or more respective probabilities, determining that a second one of the one or more respective probabilities that corresponds to a gesture other than the element control gesture is the highest one of the one or more respective probabilities, and decreasing the likelihood by an amount corresponding to a higher of the second one of the one or more respective probabilities and a fraction (e.g., a quarter, half, etc.) of the gesture-detection sensitivity threshold. For example, after the pinch-and-hold gesture has been detected (e.g., as the pinch-and-hold gesture is being released), the electronic device 107 may determine that a probability of a release gesture is higher than the probability of the pinch-and-hold gesture, and the electronic device 107 may decrease the likelihood of the pinch-and-hold gesture by an amount, such as a higher of the probability of the release gesture and a fraction of the gesture-detection sensitivity threshold.
At block 1308, the process 1300 may include activating, based on the likelihood and the gesture-detection factor, gesture-based control of an element according to the element control gesture. For example, activating the gesture-based control of the element based on the likelihood and the gesture-detection factor may include activating the gesture-based control of the element based on a comparison of the likelihood with the gesture-detection sensitivity threshold. In one or more implementations, the element may include a virtual knob, a virtual dial, a virtual slider, or a virtual remote control. In examples in which the element is a virtual remote control, the virtual remote control may have multiple associated element control gestures that can be detected by the gesture-detection system, such as a button press gesture and a swipe gesture. In one or more implementations, the device may be a first device, and activating the gesture-based control of the element according to the element control gesture may include activating the gesture-based control of the element at the first device or at a second device different from the first device.
In one or more implementations, the process 1300 may also include obtaining motion information from a motion sensor (e.g., an accelerometer and/or a gyroscope) of the device, and modifying the gesture-detection sensitivity threshold based on the motion information. For example, modifying the gesture-detection sensitivity threshold may include decreasing the gesture-detection sensitivity threshold responsive to an increase in motion of the device (e.g., due to movement of the device relative to the user and/or the user's skin resulting from the motion of the user's arm while performing the element control gesture) indicated by the motion information (e.g., as described herein in connection with
In one or more implementations, modifying the gesture-detection sensitivity threshold based on the motion information may include modifying the gesture-detection sensitivity threshold based on the motion information upon activation of the gesture-based control. In one or more implementations, the process 1300 may also include (e.g., deactivating the gesture-based control and) smoothly increasing the gesture-detection sensitivity threshold to an initial value after deactivating the gesture-based control (e.g., as described herein in connection with
In one or more implementations, obtaining the sensor data from the sensor at block 1302 includes obtaining first sensor data (e.g., sensor data 702 of
In one or more implementations, the first sensor data has a first characteristic amount of noise (e.g., relatively low noise accelerometer data) and the second sensor data has a second characteristic amount of noise (e.g., relatively higher noise EMG data) higher than the first characteristic amount of noise, and the machine learning system includes at least one processing module (e.g., a modality dropout module of the intermediate processing operations 720) interposed between the third machine learning model and the first and second machine learning models, the at least one processing module configured to emphasize (e.g., in some training runs for the gesture prediction model 724) the second sensor data having the second characteristic amount of noise higher than the first characteristic amount of noise (e.g., as discussed in connection with
In one or more implementations, the process 1300 may also include providing (e.g., by the first device or the second device), at least one of a visual indicator based on the likelihood, a haptic indicator based on the likelihood, or an auditory indicator based on the likelihood (e.g., as described herein in connection with
In the example of
At block 1404, the electronic device may obtain, based in part on providing the sensor data to a gesture control system (e.g., gesture control system 701) comprising a machine learning system (e.g., machine learning system 700) that is trained to identify one or more predicted gestures, a dynamically updating likelihood of an element control gesture being performed by a user of the device. The dynamically updating likelihood may be dynamically updated in accordance with changes in the sensor data during the period of time.
At block 1406, the electronic device may provide, for display, a dynamically updating visual indicator (e.g., visual indicator 402) of the dynamically updating likelihood of the element control gesture being performed by the user. In one or more implementations, providing the dynamically updating visual indicator for display may include providing the dynamically updating visual indicator for display at the device or at a second device (e.g., the electronic device 106) different from the device, and the process 1400 may also include displaying the dynamically updating visual indicator at the device or at the second device.
In one or more implementations, providing the dynamically updating visual indicator may include dynamically scaling an overall size of the visual indicator with the dynamically updating likelihood (e.g., as described herein in connection with
In one or more implementations, the dynamically updating visual indicator may include a plurality of distinct visual indicator components (e.g., visual indicator components 802) having a plurality of respective component sizes, and providing the dynamically updating visual indicator may also include dynamically varying the plurality of respective component sizes by an amount that scales inversely with the dynamically updating likelihood (e.g., as described in
In one or more implementations, the process 1400 may also include determining that the dynamically updating likelihood exceeds a threshold likelihood and, responsively: setting the overall size of the visual indicator to a maximum overall size (e.g., a full circumferential extent or linear transverse length as in the example of
In one or more implementations, the process 1400 may include dynamically determining a changing orientation of the element control gesture (e.g., using the gesture control system 701 as the user performs a rotation or a pan motion while holding a pinch gesture), and modifying the location of the respective subset of the plurality of distinct visual indicator components based on the changing orientation (e.g., to make the visual indicator appear to rotate with the rotation of the user's gesture, such as in the example of state 402-4 of
In one or more implementations, the process 1400 may also include effecting the gesture-based control of the element according to the changing orientation. For example, effecting the gesture-based control may include raising or lowering the volume of audio output generated by the device displaying the visual indicator or another device (e.g., a second device different from the first device), scrolling through audio or video content using the device displaying the visual indicator or another device (e.g., a second device different from the first device), raising or lowering the brightness of a light source, etc.
In one or more implementations, the visual indicator includes an indicator (e.g., indicator 904) of a current setting of the element, and effecting the gesture-based control of the element may include dynamically updating a location of the indicator by an amount that corresponds to an amount of change of the changing orientation relative to an initial orientation of the element control gesture when the dynamically updating likelihood reaches the threshold likelihood (e.g., using the relative adjustment operations described herein in connection with
In the example of
At block 1504, based on the voice input, a controller (e.g., a virtual controller) may be identified, the controller associated with an element control gesture. As examples, the controller may include a rotatable virtual control element controllable by a rotational element control gesture, a linearly moveable virtual control element controllable by a linear element control gesture, or a virtual remote control having multiple virtual control elements controllable by multiple respective element control gestures.
At block 1506, the device may modify, based on the identified controller, a gesture control system that is trained to identify one or more gestures based on sensor data from one or more sensors (e.g., as described herein in connection with
At block 1508, the device (e.g., machine learning system 700 and/or gesture detector 730) may determine a likelihood of the element control gesture being performed by a user based on an output of the modified gesture control system (e.g., as described herein in connection with
At block 1510, the process 1500 may include performing a control operation based on the likelihood of the element control gesture being performed by the user. For example, the control operation may include displaying (e.g., at the device or a second device different from the device, such as the electronic device 106) a visual indicator of the likelihood (e.g., as described herein in connection with
In the example of
At block 1604, the device may modify, based on the identified gesture, a gesture control system (e.g., gesture control system 701) that includes a machine learning system (e.g., machine learning system 700) that is trained to identify one or more gestures based on the sensor data. For example, modifying the gesture control system may include reducing a threshold for the element control gesture (e.g., a gesture-detection sensitivity threshold). As another example, modifying the gesture control system may include modifying the machine learning system. For example, modifying the machine learning system may include modifying one or more weights of one or more trained machine learning models (e.g., gesture prediction model 724) of the machine learning system of the gesture control system.
As another example, modifying the gesture control system may include modifying a weight that is applied to the likelihood by the gesture control system (e.g., by the gesture detector 730). For example, modifying the weight may include modifying the weight as described herein in connection with
In one or more implementations, prior to activating the gesture-based control, the device may apply a first amount of temporal smoothing to the determination of the likelihood and, upon activating the gesture-based control, reduce the temporal smoothing to a second amount lower than the first amount (e.g., as described herein in connection with
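As an illustrative, non-limiting example, applying a first amount of temporal smoothing to the likelihood prior to activation and a reduced amount after activation may be sketched in Python as follows. The exponential-moving-average form of smoothing and the specific smoothing factors are hypothetical and are provided for explanation only.

```python
# Illustrative sketch: apply heavier temporal smoothing to the likelihood
# before gesture-based control is activated (to suppress spurious
# activations), and lighter smoothing after activation (so that control
# responds more quickly to the user's gesture).

class SmoothedLikelihood:
    def __init__(self, pre_activation_alpha=0.2, post_activation_alpha=0.8):
        # alpha is the weight given to the newest raw likelihood sample;
        # a smaller alpha corresponds to more temporal smoothing.
        self.pre_activation_alpha = pre_activation_alpha
        self.post_activation_alpha = post_activation_alpha
        self.value = 0.0
        self.control_active = False

    def update(self, raw_likelihood, sensitivity_threshold=0.6):
        alpha = self.post_activation_alpha if self.control_active else self.pre_activation_alpha
        self.value = alpha * raw_likelihood + (1.0 - alpha) * self.value
        if self.value >= sensitivity_threshold:
            self.control_active = True  # reduce smoothing after activation
        return self.value

# Example: repeated high raw likelihoods activate control, after which the
# smoothed value tracks new samples more closely.
smoother = SmoothedLikelihood()
for raw in [0.9, 0.9, 0.9, 0.9, 0.9, 0.2]:
    print(round(smoother.update(raw), 3), smoother.control_active)
```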
At block 1606, the device (e.g., machine learning system 700 and/or gesture detector 730) may determine a likelihood of an element control gesture being performed by the user based on an output of the modified gesture control system. For example, the likelihood may be determined by the gesture detector 730 as described herein in connection with
At block 1608, the process 1600 may include performing a device control operation (e.g., for controlling the device or a second device different from the device) based on the likelihood of the element control gesture being performed by the user. For example, the device control operation may include activating gesture-based control of the device or a second device different from the device when the likelihood of the element control gesture exceeds a threshold (e.g., a gesture-detection sensitivity threshold). As another example, the device control operation may include displaying a visual indicator of the likelihood at the device or a second device different from the device (e.g., as described herein in connection with
In the example of
At block 1704, responsive to providing the sensor data to a machine learning system (e.g., machine learning system 700), an output (e.g., a prediction, such as prediction 726) may be obtained from the machine learning system. The output may indicate one or more predicted gestures and one or more respective probabilities of the one or more predicted gestures. In one or more implementations, the output may include multiple predicted gestures and a corresponding probability for each of the predicted gestures. For example, the output may include a first probability that an element control gesture (e.g., a pinch-and-hold gesture or other element control gesture) is being (or is about to be) performed by a user of the device, and a second probability that a release gesture or no gesture is being (or is about to be) performed.
At block 1706, the electronic device (e.g., gesture detector 730) may determine, based on the output of the machine learning system and a gesture-detection factor (e.g., a gesture-detection sensitivity threshold), a dynamically updating likelihood of an element control gesture being performed by a user of a first device. The dynamically updating likelihood may be dynamically updated in accordance with changes in the sensor data during the period of time.
At block 1708, the electronic device may provide, for display, a dynamically updating visual indicator (e.g., visual indicator 402) of the dynamically updating likelihood of the element control gesture being performed by the user. In one or more implementations, providing the dynamically updating visual indicator for display may include providing the dynamically updating visual indicator for display at the first device or at a second device (e.g., the electronic device 106) different from the first device, and the process 1700 may also include displaying the dynamically updating visual indicator at the first device or at the second device.
In one or more implementations, providing the dynamically updating visual indicator may include dynamically scaling an overall size of the visual indicator with the dynamically updating likelihood (e.g., as described herein in connection with
In one or more implementations, the dynamically updating visual indicator may include a plurality of distinct visual indicator components (e.g., visual indicator components 802) having a plurality of respective component sizes, and providing the dynamically updating visual indicator may also include dynamically varying the plurality of respective component sizes by an amount that scales inversely with the dynamically updating likelihood (e.g., as described in
In one or more implementations, the process 1700 may also include performing gesture control of an element at the first device or a second device different from the first device (e.g., by increasing or decreasing an audio output volume, scrolling through content, such as audio or video content, controlling a light source, or the like).
In the example of
At block 1804, based on the voice input, a controller (e.g., a virtual controller) may be identified, the controller associated with an element control gesture. As examples, the controller may include a rotatable virtual control element controllable by a rotational element control gesture, a linearly moveable virtual control element controllable by a linear element control gesture, or a virtual remote control having multiple virtual control elements controllable by multiple respective element control gestures.
At block 1806, the device may modify, based on the identified controller, a gesture control system that is trained to identify one or more gestures based on sensor data from one or more sensors (e.g., as described herein in connection with
At block 1808, responsive to providing the sensor data to a machine learning system (e.g., machine learning system 700) of the modified gesture control system, an output (e.g., a prediction, such as prediction 726) may be obtained from the machine learning system. The output may indicate one or more predicted gestures and one or more respective probabilities of the one or more predicted gestures. In one or more implementations, the output may include multiple predicted gestures and a corresponding probability for each of the predicted gestures. For example, the output may include a first probability that an element control gesture (e.g., a pinch-and-hold gesture or other element control gesture) is being (or is about to be) performed by a user of the device, and a second probability that a release gesture or no gesture is being (or is about to be) performed.
At block 1810, the electronic device may determine, using the modified gesture control system and based on the output of the machine learning system and a gesture-detection factor (e.g., a gesture-detection sensitivity threshold), a dynamically updating likelihood of an element control gesture being performed by a user of a first device. The dynamically updating likelihood may be dynamically updated in accordance with changes in the sensor data during the period of time.
At block 1812, the electronic device may provide, for display, a dynamically updating visual indicator (e.g., visual indicator 402) of the dynamically updating likelihood of the element control gesture being performed by the user. In one or more implementations, providing the dynamically updating visual indicator for display may include providing the dynamically updating visual indicator for display at the first device or at a second device (e.g., the electronic device 106) different from the first device, and the process 1800 may also include displaying the dynamically updating visual indicator at the first device or at the second device.
In one or more implementations, providing the dynamically updating visual indicator may include dynamically scaling an overall size of the visual indicator with the dynamically updating likelihood (e.g., as described herein in connection with
In one or more implementations, the process 1800 may also include performing gesture control of an element at the first device or a second device different from the first device (e.g., by increasing or decreasing an audio output volume, scrolling through content, such as audio or video content, controlling a light source, or the like).
As described above, one aspect of the present technology is the gathering and use of data available from specific and legitimate sources for probabilistic gesture control. The present disclosure contemplates that in some instances, this gathered data may include personal information data that uniquely identifies or can be used to identify a specific person. Such personal information data can include demographic data, location-based data, sensor data, gesture data, online identifiers, telephone numbers, email addresses, home addresses, device identifiers, data or records relating to a user's health or level of fitness (e.g., vital signs measurements, medication information, exercise information, EMG signals), date of birth, or any other personal information.
The present disclosure recognizes that the use of such personal information data, in the present technology, can be used to the benefit of users. For example, the personal information data can be used for providing probabilistic gesture control. Further, other uses for personal information data that benefit the user are also contemplated by the present disclosure. For instance, health and fitness data may be used, in accordance with the user's preferences, to provide insights into their general wellness, or may be used as positive feedback to individuals using technology to pursue wellness goals.
The present disclosure contemplates that those entities responsible for the collection, analysis, disclosure, transfer, storage, or other use of such personal information data will comply with well-established privacy policies and/or privacy practices. In particular, such entities would be expected to implement and consistently apply privacy practices that are generally recognized as meeting or exceeding industry or governmental requirements for maintaining the privacy of users. Such information regarding the use of personal data should be prominently and easily accessible by users, and should be updated as the collection and/or use of data changes. Personal information from users should be collected for legitimate uses only. Further, such collection/sharing should occur only after receiving the consent of the users or other legitimate basis specified in applicable law. Additionally, such entities should consider taking any needed steps for safeguarding and securing access to such personal information data and ensuring that others with access to the personal information data adhere to their privacy policies and procedures. Further, such entities can subject themselves to evaluation by third parties to certify their adherence to widely accepted privacy policies and practices. In addition, policies and practices should be adapted for the particular types of personal information data being collected and/or accessed and adapted to applicable laws and standards, including jurisdiction-specific considerations which may serve to impose a higher standard. For instance, in the US, collection of or access to certain health data may be governed by federal and/or state laws, such as the Health Insurance Portability and Accountability Act (HIPAA); whereas health data in other countries may be subject to other regulations and policies and should be handled accordingly.
Despite the foregoing, the present disclosure also contemplates aspects in which users selectively block the use of, or access to, personal information data. That is, the present disclosure contemplates that hardware and/or software elements can be provided to prevent or block access to such personal information data. For example, in the case of providing probabilistic gesture control, the present technology can be configured to allow users to select to “opt in” or “opt out” of participation in the collection of personal information data during registration for services or anytime thereafter. In addition to providing “opt in” and “opt out” options, the present disclosure contemplates providing notifications relating to the access or use of personal information. For instance, a user may be notified upon downloading an app that their personal information data will be accessed and then reminded again just before personal information data is accessed by the app.
Moreover, it is the intent of the present disclosure that personal information data should be managed and handled in a way to minimize risks of unintentional or unauthorized access or use. Risk can be minimized by limiting the collection of data and deleting data once it is no longer needed. In addition, and when applicable, including in certain health related applications, data de-identification can be used to protect a user's privacy. De-identification may be facilitated, when appropriate, by removing identifiers, controlling the amount or specificity of data stored (e.g., collecting location data at city level rather than at an address level), controlling how data is stored (e.g., aggregating data across users), and/or other methods such as differential privacy.
Therefore, although the present disclosure broadly covers use of personal information data to implement one or more various disclosed embodiments, the present disclosure also contemplates that the various embodiments can also be implemented without the need for accessing such personal information data. That is, the various embodiments of the present technology are not rendered inoperable due to the lack of all or a portion of such personal information data.
The bus 1908 collectively represents all system, peripheral, and chipset buses that communicatively connect the numerous internal devices of the electronic system 1900. In one or more implementations, the bus 1908 communicatively connects the one or more processing unit(s) 1912 with the ROM 1910, the system memory 1904, and the permanent storage device 1902. From these various memory units, the one or more processing unit(s) 1912 retrieves instructions to execute and data to process in order to execute the processes of the subject disclosure. The one or more processing unit(s) 1912 can be a single processor or a multi-core processor in different implementations.
The ROM 1910 stores static data and instructions that are needed by the one or more processing unit(s) 1912 and other modules of the electronic system 1900. The permanent storage device 1902, on the other hand, may be a read-and-write memory device. The permanent storage device 1902 may be a non-volatile memory unit that stores instructions and data even when the electronic system 1900 is off. In one or more implementations, a mass-storage device (such as a magnetic or optical disk and its corresponding disk drive) may be used as the permanent storage device 1902.
In one or more implementations, a removable storage device (such as a floppy disk, flash drive, and its corresponding disk drive) may be used as the permanent storage device 1902. Like the permanent storage device 1902, the system memory 1904 may be a read-and-write memory device. However, unlike the permanent storage device 1902, the system memory 1904 may be a volatile read-and-write memory, such as random-access memory. The system memory 1904 may store any of the instructions and data that one or more processing unit(s) 1912 may need at runtime. In one or more implementations, the processes of the subject disclosure are stored in the system memory 1904, the permanent storage device 1902, and/or the ROM 1910. From these various memory units, the one or more processing unit(s) 1912 retrieves instructions to execute and data to process in order to execute the processes of one or more implementations.
The bus 1908 also connects to the input and output device interfaces 1914 and 1906. The input device interface 1914 enables a user to communicate information and select commands to the electronic system 1900. Input devices that may be used with the input device interface 1914 may include, for example, microphones, alphanumeric keyboards, touchscreens, touchpads, and pointing devices (also called “cursor control devices”). The output device interface 1906 may enable, for example, the display of images generated by the electronic system 1900. Output devices that may be used with the output device interface 1906 may include, for example, speakers, printers, and display devices, such as a liquid crystal display (LCD), a light emitting diode (LED) display, an organic light emitting diode (OLED) display, a flexible display, a flat panel display, a solid state display, a projector, a light source, a haptic component, or any other device for outputting information. One or more implementations may include devices that function as both input and output devices, such as a touchscreen. In these implementations, feedback provided to the user can be any form of sensory feedback, such as visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input.
Finally, as shown in
In accordance with aspects of the disclosure, a method is provided that includes obtaining sensor data from a sensor; obtaining, responsive to providing the sensor data to a machine learning system, an output from the machine learning system, the output indicating one or more predicted gestures and one or more respective probabilities of the one or more predicted gestures; determining, based on the output of the machine learning system and a gesture-detection factor, a likelihood of an element control gesture being performed by a user of a device comprising the sensor; and activating, based on the likelihood and the gesture-detection factor, gesture-based control of an element according to the element control gesture.
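Tying the steps of this method together, a minimal and entirely hypothetical processing loop might look like the following; `sensor_frames`, `run_model`, and the gesture label are stand-ins for the sensor interface and the machine learning system and are not defined by the disclosure.

```python
def gesture_control_loop(sensor_frames, run_model, threshold: float = 0.8, smoothing: float = 0.3) -> bool:
    """Sketch of the method: sensor data -> model output -> likelihood -> activation decision."""
    likelihood = 0.0
    for frame in sensor_frames:              # obtain sensor data from the sensor over time
        probabilities = run_model(frame)     # output of the machine learning system
        p_gesture = probabilities.get("pinch_and_hold", 0.0)
        likelihood = (1 - smoothing) * likelihood + smoothing * p_gesture
        if likelihood >= threshold:          # gesture-detection factor as a sensitivity threshold
            return True                      # activate gesture-based control of the element
    return False
```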
In accordance with aspects of the disclosure, a method is provided that includes obtaining sensor data from a sensor of a device over a period of time; obtaining, based in part on providing the sensor data to a gesture control system comprising a machine learning system that is trained to identify one or more predicted gestures, a dynamically updating likelihood of an element control gesture being performed by a user of the device; and providing, for display, a dynamically updating visual indicator of the dynamically updating likelihood of the element control gesture being performed by the user.
In accordance with aspects of the disclosure, a method is provided that includes receiving a voice input to a device; identifying, based on the voice input, a controller, the controller associated with an element control gesture; modifying, based on the identified controller, a gesture control system that is trained to identify one or more gestures based on sensor data from one or more sensors; determining, with the modified gesture control system, a likelihood of the element control gesture being performed by a user; and performing a control operation based on the likelihood of the element control gesture being performed by the user.
In accordance with aspects of the disclosure, a method is provided that includes identifying, based on sensor data from a sensor, a gesture performed by a user of a device; modifying, based on the identified gesture, a gesture control system comprising a machine learning system that is trained to identify one or more gestures based on the sensor data; determining, with the modified gesture control system, a likelihood of an element control gesture being performed by the user; and performing a device control operation based on the likelihood of the element control gesture being performed by the user.
In accordance with aspects of the disclosure, a method is provided that includes obtaining sensor data from a sensor; obtaining, responsive to providing the sensor data to a machine learning system, an output from the machine learning system, the output indicating one or more predicted gestures and one or more respective probabilities of the one or more predicted gestures; determining, based on the output of the machine learning system and a gesture-detection factor, a dynamically updating likelihood of an element control gesture being performed by a user of a first device; and providing, for display, a dynamically updating visual indicator of the dynamically updating likelihood of the element control gesture being performed by the user.
In accordance with aspects of the disclosure, a method is provided that includes receiving a voice input to a device; identifying, based on the voice input, a controller, the controller associated with an element control gesture; modifying, based on the identified controller, a gesture control system that is trained to identify one or more gestures based on sensor data from one or more sensors; obtaining, responsive to providing the sensor data to a machine learning system of the modified gesture control system, an output from the machine learning system, the output indicating one or more predicted gestures and one or more respective probabilities of the one or more predicted gestures; determining, by the modified gesture control system and based on the output of the machine learning system and a gesture-detection factor, a dynamically updating likelihood of an element control gesture being performed by a user of a first device; and providing, for display, a dynamically updating visual indicator of the dynamically updating likelihood of the element control gesture being performed by the user.
Implementations within the scope of the present disclosure can be partially or entirely realized using a tangible computer-readable storage medium (or multiple tangible computer-readable storage media of one or more types) encoding one or more instructions. The tangible computer-readable storage medium also can be non-transitory in nature.
The computer-readable storage medium can be any storage medium that can be read, written, or otherwise accessed by a general purpose or special purpose computing device, including any processing electronics and/or processing circuitry capable of executing instructions. For example, without limitation, the computer-readable medium can include any volatile semiconductor memory, such as RAM, DRAM, SRAM, T-RAM, Z-RAM, and TTRAM. The computer-readable medium also can include any non-volatile semiconductor memory, such as ROM, PROM, EPROM, EEPROM, NVRAM, flash, nvSRAM, FeRAM, FeTRAM, MRAM, PRAM, CBRAM, SONOS, RRAM, NRAM, racetrack memory, FJG, and Millipede memory.
Further, the computer-readable storage medium can include any non-semiconductor memory, such as optical disk storage, magnetic disk storage, magnetic tape, other magnetic storage devices, or any other medium capable of storing one or more instructions. In one or more implementations, the tangible computer-readable storage medium can be directly coupled to a computing device, while in other implementations, the tangible computer-readable storage medium can be indirectly coupled to a computing device, e.g., via one or more wired connections, one or more wireless connections, or any combination thereof.
Instructions can be directly executable or can be used to develop executable instructions. For example, instructions can be realized as executable or non-executable machine code or as instructions in a high-level language that can be compiled to produce executable or non-executable machine code. Further, instructions also can be realized as or can include data. Computer-executable instructions also can be organized in any format, including routines, subroutines, programs, data structures, objects, modules, applications, applets, functions, etc. As recognized by those of skill in the art, details including, but not limited to, the number, structure, sequence, and organization of instructions can vary significantly without varying the underlying logic, function, processing, and output.
While the above discussion primarily refers to microprocessor or multi-core processors that execute software, one or more implementations are performed by one or more integrated circuits, such as ASICs or FPGAs. In one or more implementations, such integrated circuits execute instructions that are stored on the circuit itself.
Those of skill in the art would appreciate that the various illustrative blocks, modules, elements, components, methods, and algorithms described herein may be implemented as electronic hardware, computer software, or combinations of both. To illustrate this interchangeability of hardware and software, various illustrative blocks, modules, elements, components, methods, and algorithms have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application. Various components and blocks may be arranged differently (e.g., arranged in a different order, or partitioned in a different way) all without departing from the scope of the subject technology.
It is understood that any specific order or hierarchy of blocks in the processes disclosed is an illustration of example approaches. Based upon design preferences, it is understood that the specific order or hierarchy of blocks in the processes may be rearranged, or that not all illustrated blocks need be performed. Any of the blocks may be performed simultaneously. In one or more implementations, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the implementations described above should not be understood as requiring such separation in all implementations, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
As used in this specification and any claims of this application, the terms “base station”, “receiver”, “computer”, “server”, “processor”, and “memory” all refer to electronic or other technological devices. These terms exclude people or groups of people. For the purposes of this specification, the terms “display” or “displaying” mean displaying on an electronic device.
As used herein, the phrase “at least one of” preceding a series of items, with the term “and” or “or” to separate any of the items, modifies the list as a whole, rather than each member of the list (i.e., each item). The phrase “at least one of” does not require selection of at least one of each item listed; rather, the phrase allows a meaning that includes at least one of any one of the items, and/or at least one of any combination of the items, and/or at least one of each of the items. By way of example, the phrases “at least one of A, B, and C” or “at least one of A, B, or C” each refer to only A, only B, or only C; any combination of A, B, and C; and/or at least one of each of A, B, and C.
The predicate words “configured to”, “operable to”, and “programmed to” do not imply any particular tangible or intangible modification of a subject, but, rather, are intended to be used interchangeably. In one or more implementations, a processor configured to monitor and control an operation or a component may also mean the processor being programmed to monitor and control the operation or the processor being operable to monitor and control the operation. Likewise, a processor configured to execute code can be construed as a processor programmed to execute code or operable to execute code.
Phrases such as an aspect, the aspect, another aspect, some aspects, one or more aspects, an implementation, the implementation, another implementation, some implementations, one or more implementations, an embodiment, the embodiment, another embodiment, some embodiments, one or more embodiments, a configuration, the configuration, another configuration, some configurations, one or more configurations, the subject technology, the disclosure, the present disclosure, other variations thereof and the like are for convenience and do not imply that a disclosure relating to such phrase(s) is essential to the subject technology or that such disclosure applies to all configurations of the subject technology. A disclosure relating to such phrase(s) may apply to all configurations, or one or more configurations. A disclosure relating to such phrase(s) may provide one or more examples. A phrase such as an aspect or some aspects may refer to one or more aspects and vice versa, and this applies similarly to other foregoing phrases.
The word “exemplary” is used herein to mean “serving as an example, instance, or illustration”. Any embodiment described herein as “exemplary” or as an “example” is not necessarily to be construed as preferred or advantageous over other implementations. Furthermore, to the extent that the term “include”, “have”, or the like is used in the description or the claims, such term is intended to be inclusive in a manner similar to the term “comprise” as “comprise” is interpreted when employed as a transitional word in a claim.
All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. No claim element is to be construed under the provisions of 35 U.S.C. § 112(f) unless the element is expressly recited using the phrase “means for” or, in the case of a method claim, the element is recited using the phrase “step for”.
The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. Thus, the claims are not intended to be limited to the aspects shown herein, but are to be accorded the full scope consistent with the language claims, wherein reference to an element in the singular is not intended to mean “one and only one” unless specifically so stated, but rather “one or more”. Unless specifically stated otherwise, the term “some” refers to one or more. Pronouns in the masculine (e.g., his) include the feminine and neuter gender (e.g., her and its) and vice versa. Headings and subheadings, if any, are used for convenience only and do not limit the subject disclosure.
This application claims the benefit of priority to U.S. Provisional Patent Application No. 63/409,638, entitled “Probabilistic Gesture Control With Feedback For Electronic Devices,” filed on Sep. 23, 2022, the disclosure of which is hereby incorporated by reference herein in its entirety.