The present description relates generally to measurements of muscular force and gesture recognition.
Surface electromyography (EMG) generally involves placing several electrodes scattered around an area of the skin of a subject in order to measure electrical potential (voltage) across nerves or muscles of the subject.
Certain features of the subject technology are set forth in the appended claims. However, for purpose of explanation, several implementations of the subject technology are set forth in the following figures.
The detailed description set forth below is intended as a description of various configurations of the subject technology and is not intended to represent the only configurations in which the subject technology can be practiced. The appended drawings are incorporated herein and constitute a part of the detailed description. The detailed description includes specific details for the purpose of providing a thorough understanding of the subject technology. However, the subject technology is not limited to the specific details set forth herein and can be practiced using one or more other implementations. In one or more implementations, structures and components are shown in block diagram form in order to avoid obscuring the concepts of the subject technology.
Techniques are presented for improved muscular force estimates. The improved techniques may include single-channel or multiple-channel electromyography (EMG), where EMG measurements are taken with electrodes such as via a measurement device worn on a wrist. A resulting muscular force estimate may be used, for example, for improving hand gesture recognition and/or for producing a health metric for a user. Electrodes may provide a series of voltage measurements over time of a subject user, from which a muscular force may be estimated. In an aspect, the estimate may be based on the measurements of a differential pair of electrodes.
In some implementations, the estimate of muscular force may be based on one or more measures derived from EMG voltage measurements. For example, the estimate of muscular force may be based on a measure of variation between adjacent voltage measurements (e.g., a standard deviation of differences between adjacent voltage measurements (DASDV), or a median absolute deviation (MAD)). In a second example, the estimate of muscular force may be based on estimated spectral properties of the voltage measurements, such as a spectral moment. In a third example, the muscular force estimate may be based on a combination of measures of variation, spectral properties, and/or other measurements such as fractal dimension metrics or derivation-based metrics, which will collectively be referred to as “stability” metrics in this application.
In other implementations, the estimate of muscular force may be based on an estimated mean frequency of the voltage measurements, such as a first-order spectral moment calculated from the voltage measurements. In some aspects, an estimate of muscular force for a user may be adjusted based on calibration information derived from a calibration process with that particular user.
An estimate of muscular force may be used to improve gesture recognition. In an aspect, an EMG device may be attached to a subject user's wrist for generating voltage measurements related to muscular forces of the user's hand. In another aspect, a separate sensor for recognizing gestures of the user's hand, such as a camera for capturing images of the hand, may detect gestures of the hand. In one aspect for improved gesture recognition, a muscular force estimate from an EMG device may be used to adjust a preliminary confidence estimate of a detected gesture.
In an aspect, gesture sensor 106 may be incorporated as part of a headset worn by the subject user, or may be incorporated in a tablet, cell phone, or other device positioned in proximity of the subject user and the user's gesturing body part (such as hand 104). Gesture sensor 106 may include a camera capable of capturing video or still images from visible light, infrared light, or radar or sonar signals reflecting off the gesturing body part. In addition to or instead of a camera, gesture sensor 106 may include a motion sensor such as an accelerometer attached or coupled to the gesturing body part, and may include one or more other types of sensors for capturing data indicative of a gesture by a body part.
In an aspect, muscular force estimator 220 may include an estimator of signal variation 222 and may include an estimator of stability 224. In an aspect, the muscular force estimator may estimate a force based on a combination of variation metrics of the voltage measurements and stability of the voltage measurements. Additional details regarding estimation of muscular force are provided below regarding FIG. 3.
In an aspect, system 200 may use an estimate of muscular force to improve a recognition of gestures by a body part such as hand 104 (FIG. 1).
Additional optional aspects of system 200 may include gesture sensor(s) 230 and gesture detector 240. In an example based on FIG. 1, gesture sensor(s) 230 may correspond to gesture sensor 106.
In other aspects not depicted, muscular force estimator 220 may not be embodied in the same device as electrode sensor 210. For example, muscular force estimator 220 may be incorporated in a device that also includes gesture sensors 106/230. Alternately, the muscular force estimator 220 may be included in a device that also includes gesture detector 240, such as a cloud computer or cell phone that is paired with sensors 210, 230. One of skill in the art will understand that various other configurations are possible.
A variation metric of the voltage measurements may be determined (box 308), for example, as a difference absolute standard deviation value (DASDV), which may be a standard deviation of the differences between adjacent samples, such as:

DASDV = sqrt( (1/(N−1)) · sum_{i=1..N−1} (x_{i+1} − x_i)² )  (Eq. 1)
where N is an integer window size of the voltage measurement samples x, and x_i refers to the ith sample within the window. In another aspect, a variation metric may be determined as a median absolute deviation (MAD), which may be the median of the absolute differences between each sample and the median (or mean) of the samples in a window, such as:

MAD = median(abs(x_i − median(x)))  (Eq. 2)

where x_i refers to the ith voltage measurement within a window of length N.
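By way of non-limiting illustration, the two variation metrics above may be computed as in the following Python sketch, assuming NumPy and a pre-windowed array of voltage samples:

```python
import numpy as np

def dasdv(x: np.ndarray) -> float:
    """Standard deviation of differences between adjacent samples (Eq. 1)."""
    diffs = np.diff(x)  # x[i+1] - x[i] for each adjacent pair in the window
    return float(np.sqrt(np.sum(diffs ** 2) / (len(x) - 1)))

def mad(x: np.ndarray) -> float:
    """Median absolute deviation of the samples from their median (Eq. 2)."""
    return float(np.median(np.abs(x - np.median(x))))

# Example: metrics over a hypothetical 256-sample window of EMG voltages.
rng = np.random.default_rng(0)
window = rng.normal(0.0, 1e-4, size=256)
print(dasdv(window), mad(window))
```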
The determined variation (box 308) may be smoothed (box 310) and/or normalized (box 312) before being used to compute the force estimate (box 320). Smoothing of variation may be performed, for example, with a non-zero window size (box 310), and normalization (box 312) may be to a range from zero to one.
In another aspect, the determined variation may be combined with a determined metric of stability in the series of voltage measurements. For example, a fractal dimension estimate (e.g., as computed with a method proposed by M. J. Katz) may indicate how detail in a pattern in the series of voltage measurements changes with the scale at which the pattern is measured:

FD = log10(L/a) / log10(d/a)

where the estimated fractal dimension FD is based on a set of sequential voltage measurement samples, using a sum (L) and average (a) of the Euclidean distances between successive samples in the set, and using a maximum distance (d) between a first sample and all other samples in the set.
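As a non-limiting sketch, a Katz fractal dimension may be computed as follows, under the assumption that the distance between successive voltage samples is taken as the absolute amplitude difference:

```python
import numpy as np

def katz_fd(x: np.ndarray) -> float:
    """Katz fractal dimension: log10(L/a) / log10(d/a), where L is the summed
    distance between successive samples, a is the average step, and d is the
    maximum distance from the first sample to any other sample."""
    steps = np.abs(np.diff(x))       # distances between successive samples
    L = steps.sum()                  # total waveform length
    a = steps.mean()                 # average step length
    d = np.abs(x - x[0]).max()       # maximum distance from the first sample
    return float(np.log10(L / a) / np.log10(d / a))
```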
In an aspect, muscular force may be computed (box 320) by combining smoothed (boxes 310, 316) and/or normalized (boxes 312, 318) versions of the variation, spectral properties, and/or stability metric. Furthermore, the computed muscular force (box 320) may be further smoothed (box 322), such as with a non-zero length window.
In an aspect, smoothing, such as in optional boxes 310, 316, 322, may include techniques to remove noise, slow a rate of change, reduce high frequencies, or average over multiple neighboring samples. For example, smoothing operations may process a predetermined number of input samples to determine a single output sample, where a “window size” for the smoothing is the predetermined number. In an aspect, smoothing operations may differ between boxes 310, 316, and 322, and a corresponding window size for each may differ.
In aspects, a variety of normalization functions may be used. For example, a fixed normalization may be done using a fixed minimum and maximum, where the fixed minimum and fixed maximum are determined experimentally by a user. In other examples, normalization may be based on a minimum and maximum over a window of sampled voltage measurements, where the minimum and maximum are, for example, mean-based, median-based, or range-based. A mean-based normalization may have: minimum = mean − (standard deviation × factor); and maximum = mean + (standard deviation × factor). A median-based normalization may have: minimum = median − (MAD × factor); and maximum = median + (MAD × factor), where MAD is a median absolute deviation, as described above.
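For illustration only, the smoothing and normalization operations described above might be sketched as follows; the window size and the multiplying factor are placeholders, not values from this disclosure:

```python
import numpy as np

def smooth(values: np.ndarray, window_size: int) -> np.ndarray:
    """Moving-average smoothing with a non-zero window size."""
    kernel = np.ones(window_size) / window_size
    return np.convolve(values, kernel, mode="same")

def normalize_mean_based(values: np.ndarray, factor: float = 2.0) -> np.ndarray:
    """Normalize to [0, 1] using minimum/maximum = mean -/+ std * factor."""
    lo = values.mean() - values.std() * factor
    hi = values.mean() + values.std() * factor
    return np.clip((values - lo) / (hi - lo), 0.0, 1.0)

def normalize_median_based(values: np.ndarray, factor: float = 2.0) -> np.ndarray:
    """Normalize to [0, 1] using minimum/maximum = median -/+ MAD * factor."""
    med = np.median(values)
    mad_val = np.median(np.abs(values - med))
    lo, hi = med - mad_val * factor, med + mad_val * factor
    return np.clip((values - lo) / (hi - lo), 0.0, 1.0)
```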
In an optional aspect of process 300, a preliminary confidence of a gesture detection may be modified (box 326) based on an estimated muscular force to produce a likelihood of detecting a gesture. A preliminary confidence of gesture detection may be, for example, an estimated probability that the subject user intended a particular gesture. See discussion below regarding gesture detector 430 (FIG. 4).
In one or more implementations, one or more of the sensor data 402, the sensor data 404, and the sensor data 406 may have characteristics (e.g., noise characteristics) that significantly differ from the characteristics of others of the sensor data 402, the sensor data 404, and the sensor data 406. For example, EMG data (e.g., sensor data 406) is susceptible to various sources of noise arising from nearby electrical devices or poor skin-to-electrode contact. Therefore, EMG data can be significantly noisier than accelerometer data (e.g., sensor data 402) or gyroscope data (e.g., sensor data 404). This can be problematic for training a machine learning model to detect a gesture based on these multiple different types of data with differing characteristics.
The system of FIG. 4 may address this, in part, by providing a separate machine learning model, trained as a feature extractor, for each type of sensor data.
For example, the machine learning model 408 may be a feature extractor trained to extract features of sensor data of the same type as sensor data 402, the machine learning model 410 may be a feature extractor trained to extract features of sensor data of the same type as sensor data 404, and the machine learning model 412 may be a feature extractor trained to extract features of sensor data of the same type as sensor data 406. As shown, machine learning model 408 may output a feature vector 414 containing features extracted from sensor data 402, machine learning model 410 may output a feature vector 416 containing features extracted from sensor data 404, and machine learning model 412 may output a feature vector 418 containing features extracted from sensor data 406. In this example, three types of sensor data are provided to three feature extractors; however, more or fewer than three types of sensor data may be used in conjunction with more or fewer than three corresponding feature extractors in other implementations.
As shown in FIG. 4, the feature vector 414, the feature vector 416, and the feature vector 418 may be provided to intermediate processing operations 420, which may generate a combined input vector 422 for a gesture prediction model 424.
In order to generate the combined input vector 422 for the gesture prediction model 424, the intermediate processing operations 420 may perform modality dropout operations, average pooling operations, modality fusion operations, and/or other intermediate processing operations. For example, the modality dropout operations may periodically and temporarily replace one, some, or all of the feature vector 414, the feature vector 416, or the feature vector 418 with replacement data (e.g., zeros) while leaving the others of the feature vector 414, the feature vector 416, or the feature vector 418 unchanged. In this way, the modality dropout operations can prevent the gesture prediction model from learning to ignore sensor data from one or more of the sensors (e.g., by learning to ignore, for example, high noise data when other sensor data is low noise data). Modality dropout operations can be performed during training of the gesture prediction model 424, and/or during prediction operations with the gesture prediction model 424. In one or more implementations, the modality dropout operations can improve the ability of the machine learning system 400 to generate reliable and accurate gesture predictions using multi-mode sensor data. In one or more implementations, the average pooling operations may include determining one or more averages (or other mathematical combinations, such as medians) for one or more portions of the feature vector 414, the feature vector 416, and/or the feature vector 418 (e.g., to downsample one or more of the feature vector 414, the feature vector 416, and/or the feature vector 418 to a common size with the others of the feature vector 414, the feature vector 416, and/or the feature vector 418, for combination by the modality fusion operations). In one or more implementations, the modality fusion operations may include combining (e.g., concatenating) the feature vectors processed by the modality dropout operations and the average pooling operations to form the combined input vector 422.
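As a non-limiting sketch of the intermediate processing operations 420, modality dropout, average pooling, and modality fusion might be combined as follows; the dropout probability, vector sizes, and pooled size are illustrative placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

def modality_dropout(vectors, drop_prob=0.2):
    """Temporarily replace whole feature vectors with zeros, leaving the
    other modalities unchanged."""
    return [np.zeros_like(v) if rng.random() < drop_prob else v for v in vectors]

def average_pool(v, target_size):
    """Downsample a feature vector to a common size by averaging chunks."""
    return np.array([chunk.mean() for chunk in np.array_split(v, target_size)])

def fuse(vectors, target_size=64):
    """Pool each modality to a common size, then concatenate into one vector."""
    return np.concatenate([average_pool(v, target_size) for v in vectors])

# Hypothetical feature vectors for accelerometer, gyroscope, and EMG features.
f414, f416, f418 = rng.normal(size=128), rng.normal(size=128), rng.normal(size=256)
combined_422 = fuse(modality_dropout([f414, f416, f418]))
print(combined_422.shape)  # (192,)
```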
The gesture prediction model 424 may be a machine learning model that has been trained to predict a gesture that is about to be performed or that is being performed by a user, based on a combined input vector 422 that is derived from multi-modal sensor data. In one or more implementations, the machine learning system 400 of the gesture control system 401 (e.g., including the machine learning model 408, the machine learning model 410, the machine learning model 412, and the gesture prediction model 424) may be trained on sensor data obtained by the device in which the machine learning system 400 is implemented and from the user of that device, and/or sensor data obtained from multiple (e.g., hundreds, thousands, millions) of devices from multiple (e.g., hundreds, thousands, millions) of anonymized users, obtained with the explicit permission of the users. In one or more implementations, the gesture prediction model 424 may output a prediction 426. In one or more implementations, the prediction 426 may include one or more predicted gestures (e.g., of one or multiple gestures that the model has been trained to detect), and may also output a probability that the predicted gesture has been detected. In one or more implementations, the gesture prediction model may output multiple predicted gestures with multiple corresponding probabilities. In one or more implementations, the machine learning system 400 can generate a new prediction 426 based on new sensor data periodically (e.g., once per second, ten times per second, hundreds of times per second, once per millisecond, or with any other suitable periodic rate).
As shown in FIG. 4, the prediction 426 may be provided to a gesture detector 430. In one or more implementations, the gesture detector 430 may determine, based on one or more predictions 426 and a gesture detection factor, whether a gesture has been detected.
In an aspect, outputs of gesture detector 430 may be further based on an estimate of muscular force such as described above regarding FIG. 3.
For example, the gesture detector 430 may periodically generate a dynamically updating likelihood of an element control gesture (e.g., a pinch-and-hold gesture), such as by generating a likelihood for each prediction 426 or for aggregated sets of predictions 426 (e.g., in implementations in which temporal smoothing is applied). For example, when an element control gesture is the highest probability gesture from the gesture prediction model 424, the gesture detector 430 may increase the likelihood of the element control gesture based on the probability of that gesture from the gesture prediction model 424 and based on the gesture detection factor. For example, the gesture detection factor may be a gesture-detection sensitivity threshold. In one or more implementations, the gesture-detection sensitivity threshold may be a user-controllable threshold that the user can change to set the sensitivity of activating gesture control to the user's desired level. In one or more implementations, the gesture detector 430 may increase the likelihood of the element control gesture based on the probability of that gesture from the gesture prediction model 424, and based on the gesture detection factor, by increasing the likelihood by an amount corresponding to the higher of the probability of the element control gesture and a fraction (e.g., half) of the gesture-detection sensitivity threshold.
In a use case in which the element control gesture is not the gesture with the highest probability from the gesture prediction model 424 (e.g., the gesture prediction model 424 has output the element control gesture with a probability that is lower than the probability of another gesture predicted in the output of the gesture prediction model 424), the gesture detector 430 may decrease the likelihood of the element control gesture by an amount corresponding to the higher of the probability of whichever gesture has the highest probability from the gesture prediction model 424 and a fraction (e.g., half) of the gesture-detection sensitivity threshold. In this way, the likelihood can be dynamically updated up or down based on the output of the gesture prediction model 424 and the gesture detection factor (e.g., the gesture-detection sensitivity threshold).
As each instance of this dynamically updating likelihood is generated, the likelihood (e.g., or an aggregated likelihood based on several recent instances of the dynamically updating likelihood, in implementations in which temporal smoothing is used) may be compared to the gesture-detection sensitivity threshold. When the likelihood is greater than or equal to the gesture-detection sensitivity threshold, the gesture detector 430 may determine that the gesture has been detected and may provide an indication of the detected element control gesture to a control system 432. When the likelihood is less than the gesture-detection sensitivity threshold, the gesture detector 430 may determine that the gesture has not been detected and may not provide an indication of the detected element control gesture to a control system 432. In one or more implementations, providing the indication of the detected element control gesture may activate gesture-based control of an element at an electronic device (e.g., the wrist sensor 102 (FIG. 1) or another electronic device).
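For illustration only, the dynamic likelihood update and threshold comparison described above might be sketched as follows; the dictionary format for predictions, the gesture names, and the threshold value are hypothetical:

```python
def update_likelihood(likelihood: float, prediction: dict, threshold: float) -> float:
    top = max(prediction, key=prediction.get)      # highest-probability gesture
    step = threshold / 2.0                         # fraction (e.g., half) of threshold
    if top == "element_control":
        likelihood += max(prediction[top], step)   # raise toward detection
    else:
        likelihood -= max(prediction[top], step)   # lower when another gesture leads
    return min(max(likelihood, 0.0), 1.0)

likelihood, threshold = 0.0, 0.6                   # threshold value is a placeholder
for pred in ({"element_control": 0.4, "swipe": 0.3},
             {"element_control": 0.7, "swipe": 0.1}):
    likelihood = update_likelihood(likelihood, pred, threshold)
    if likelihood >= threshold:
        print("element control gesture detected")  # notify control system 432
```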
Throughout the dynamic updating of the likelihood by the gesture detector 430, the dynamically updating likelihood may be provided to a display controller. For example, the display controller (e.g., an application-level or system-level process with the capability of controlling display content for a display operating at the wrist sensor 102 (FIG. 1)) may update the displayed content based on the dynamically updating likelihood.
In various implementations, the control system 432 and/or the display controller may be implemented as, or as part of, a system-level process at an electronic device or as, or as part of, an application (e.g., a media player application that controls playback of audio and/or video content, or a connected home application that controls smart appliances, light sources, or the like). In various implementations, the display controller may be implemented at the electronic device with the gesture prediction model 424 and the gesture detector 430 or may be implemented at a different device. In one or more implementations, the control system 432 and the display controller may be implemented separately or as part of a common system or application process.
Once the element control gesture is detected and the gesture-based control is activated, gesture control system 401 of FIG. 4 may be used to control the element.
System 500 includes an electrode sensor 510 and a muscular force estimator 520. In some implementations, some elements of system 500, such as any of elements 520-540, may be implemented on a processor, such as processor 814 (FIG. 8).
In an aspect, muscular force estimator 520 may include spectral moment estimator 524. Spectral moment estimator 524 may estimate a spectral moment of a series of voltage measurements from electrode sensor 510. A spectral moment may characterize a frequency spectrum of a series of measurements, and a first-order spectral moment may estimate a mean value of the frequency spectrum. Spectral moment estimator 524 may determine a frequency spectrum of a series of measurements. Frequency transform 523 may transform a time-domain series of measurements, such as from the electrode sensor, into a frequency-domain representation. Frequency transform 523 may include, for example, a Fourier transform (such as a discrete Fourier transform (DFT) or a fast Fourier transform (FFT)) or a discrete cosine transform (DCT). In an aspect, the frequency-domain representation may include complex numbers each having a real and an imaginary component.
In some implementations, a spectral moment may be computed as:

SM = sum_{k=0..N−1} k · sqrt( re_f(k)² + im_f(k)² )

where:
N is the length of the signal,
k is the frequency index,
re_f(k) is the real component of the frequency-domain representation at frequency index k, and
im_f(k) is the imaginary component of the frequency-domain representation at frequency index k.
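As a non-limiting sketch, the expression above may be evaluated with a fast Fourier transform; normalizing the moment by the total spectral magnitude (and scaling by the sampling rate) yields a mean-frequency estimate, an assumption consistent with the mean-frequency discussion above:

```python
import numpy as np

def spectral_moment(x: np.ndarray) -> float:
    """First-order spectral moment: sum over k of k * |X[k]|, where |X[k]|
    is computed from the real and imaginary FFT components."""
    X = np.fft.rfft(x)                       # frequency-domain representation
    magnitudes = np.sqrt(X.real ** 2 + X.imag ** 2)
    k = np.arange(len(magnitudes))           # frequency index
    return float(np.sum(k * magnitudes))

def mean_frequency(x: np.ndarray, fs: float) -> float:
    """Magnitude-weighted mean frequency in Hz, for sampling rate fs."""
    X = np.fft.rfft(x)
    mags = np.abs(X)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return float(np.sum(freqs * mags) / np.sum(mags))
```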
In implementations, a series of electrode measurements from electrode sensor 510 may be filtered by noise filter 522 before calculating a muscular force. For example, noise filter 522 may include a high-pass filter for eliminating low-frequency noise, and/or noise filter 522 may include a notch filter, for example, to filter noise occurring around a particular notch frequency such as 60 Hz. Noise filter 522 may be applied to a series of measurements prior to estimating a spectral moment, such as with spectral moment estimator 524.
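By way of illustration, noise filter 522 might be sketched with standard filter-design routines as follows; the high-pass cutoff, filter order, and notch quality factor are placeholders rather than values from this disclosure:

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

def filter_emg(x: np.ndarray, fs: float) -> np.ndarray:
    """High-pass filter to remove low-frequency noise, then notch filter
    to remove noise around 60 Hz."""
    b_hp, a_hp = butter(4, 20.0, btype="highpass", fs=fs)  # 20 Hz cutoff
    x = filtfilt(b_hp, a_hp, x)
    b_n, a_n = iirnotch(60.0, Q=30.0, fs=fs)               # 60 Hz notch
    return filtfilt(b_n, a_n, x)
```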
In some implementations, an estimate of muscular force, such as from spectral moment estimator 524, may be adjusted by force adjuster 525 based on calibration information. For example, calibration information may indicate a correlation between an experimentally measured muscular force and an estimated spectral moment, and the calibration information may be used to “zero” adjust the muscular force estimate by shifting and/or scaling an estimated spectral moment to determine an estimated muscular force. In an aspect, calibration information may be determined based on a calibration process for electrode sensor 510 with a particular user. For example, a grip strength measuring device such as a dynamometer may be held by the particular user in a hand that is also wearing the electrode sensor 510, and measurements during a calibration process may correlate dynamometer strength measurements with estimates of a spectral moment of electrode sensor measurements.
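As a non-limiting sketch, calibration information might be reduced to a shift and scale by fitting a line to paired dynamometer readings and spectral-moment estimates; the sample values below are hypothetical:

```python
import numpy as np

# Hypothetical paired calibration samples: spectral-moment estimates recorded
# while the user squeezes a dynamometer at known grip forces.
moments = np.array([0.12, 0.25, 0.41, 0.60])   # estimated spectral moments
forces = np.array([5.0, 15.0, 27.0, 40.0])     # dynamometer readings (kg)

scale, shift = np.polyfit(moments, forces, deg=1)  # least-squares line fit

def adjusted_force(moment: float) -> float:
    """Apply calibration by scaling and shifting a spectral-moment estimate."""
    return scale * moment + shift
```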
In an implementation, motion/rotation detector 530 may measure motion and/or rotation of electrode sensor 510, which may be used to disqualify muscular force estimates. For example, when motion or rotation of electrode sensor 510 is above respective thresholds, a muscular force estimate may be disqualified, or provided with an indication of low confidence. Large or fast motions or rotations of electrode sensor 510 may indicate movements of the arm wearing electrode sensor 510, and the estimated muscular force may be unreliable at that time. For example, when an arm is moving, an estimated muscular force may partly reflect forces of muscles used to move the arm and may not represent only the force of muscles used for hand grip strength. In another aspect, an estimated muscular force may be disqualified whenever it is below a muscular force threshold.
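For illustration only, the disqualification logic might be sketched as a gating check on motion, rotation, and a minimum force; all threshold values are placeholders:

```python
def qualify_force_estimate(force, accel_mag, gyro_mag,
                           motion_thresh=1.5, rotation_thresh=2.0,
                           min_force=0.05):
    """Return (estimate, confident): disqualify when the sensor is moving or
    rotating above the thresholds, or when the estimate is below a floor."""
    if accel_mag > motion_thresh or gyro_mag > rotation_thresh:
        return force, False   # arm movement may corrupt the estimate
    if force < min_force:
        return None, False    # below the muscular force threshold
    return force, True
```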
Some health metrics may be based on estimates of muscular force. For example, a hand grip force estimate of a user from muscular force estimator 520 may be used by health metric estimator 540 to determine a health metric for the user. For example, a low grip strength or a fast drop in grip strength may be indicative of health problems.
Process 600 may be implemented, for example, with system 500 (FIG. 5).
In one or more implementations, the system 200 and/or device 800 may include various sensors at various locations for determining proximity to one or more devices for gesture control, for determining relative or absolute locations of the device(s) for gesture control, and/or for detecting user gestures (e.g., by providing sensor data from the sensor(s) to a machine learning system).
In the example of FIG. 7, a device such as the wrist sensor 102 includes a housing 702 and a band 704 for attaching the device to a user's wrist.
As shown in FIG. 7, circuitry 706 may be disposed within housing 702.
Housing 702 and band 704 may be attached together at interface 708. Interface 708 may be a purely mechanical interface or may include an electrical connector interface between circuitry within band 704 and circuitry 706 within housing 702 in various implementations. Processing circuitry such as the processor 814 of circuitry 706 may be communicatively coupled to one or more of sensors that are mounted in the housing 702 and/or one or more of sensors that are mounted in the band 704 (e.g., via interface 708).
In the example of FIG. 7, housing 702 includes a sidewall 710.
In one or more implementations, one or more of the sensors 210, 230 may be mounted on or to the sidewall 710 of housing 702.
Although various examples, including the example of FIG. 7, describe a device worn on a user's wrist, the subject technology may be implemented in devices having other form factors in other implementations.
Although not visible in FIG. 7, one or more sensors (e.g., electrodes of electrode sensor 210) may also be mounted on a skin-facing surface of housing 702 and/or band 704.
The bus 810 collectively represents all system, peripheral, and chipset buses that communicatively connect the numerous internal devices of the computing device 800. In one or more implementations, the bus 810 communicatively connects the one or more processors 814 with the ROM 812, the system memory 804, and the permanent storage device 802. From these various memory units, the one or more processors 814 retrieve instructions to execute and data to process in order to execute the processes of the subject disclosure. The one or more processors 814 can be a single processor or a multi-core processor in different implementations.
The ROM 812 stores static data and instructions that are needed by the one or more processors 814 and other modules of the computing device 800. The permanent storage device 802, on the other hand, may be a read-and-write memory device. The permanent storage device 802 may be a non-volatile memory unit that stores instructions and data even when the computing device 800 is off. In one or more implementations, a mass-storage device (such as a magnetic or optical disk and its corresponding disk drive) may be used as the permanent storage device 802.
In one or more implementations, a removable storage device (such as a floppy disk, flash drive, and its corresponding disk drive) may be used as the permanent storage device 802. Like the permanent storage device 802, the system memory 804 may be a read-and-write memory device. However, unlike the permanent storage device 802, the system memory 804 may be a volatile read-and-write memory, such as random-access memory. The system memory 804 may store any of the instructions and data that one or more processors 814 may need at runtime. In one or more implementations, the processes of the subject disclosure are stored in the system memory 804, the permanent storage device 802, and/or the ROM 812. From these various memory units, the one or more processors 814 retrieve instructions to execute and data to process in order to execute the processes of one or more implementations.
The bus 810 also connects to the input and output device interfaces 806 and 808. The input device interface 806 enables a user to communicate information and select commands to the computing device 800. Input devices that may be used with the input device interface 806 may include, for example, alphanumeric keyboards and pointing devices (also called “cursor control devices”). The output device interface 808 may enable, for example, the display of images generated by computing device 800. Output devices that may be used with the output device interface 808 may include, for example, printers and display devices, such as a liquid crystal display (LCD), a light emitting diode (LED) display, an organic light emitting diode (OLED) display, a flexible display, a flat panel display, a solid-state display, a projector, or any other device for outputting information.
One or more implementations may include devices that function as both input and output devices, such as a touchscreen. In these implementations, feedback provided to the user can be any form of sensory feedback, such as visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input.
Finally, as shown in FIG. 8, the bus 810 also couples the computing device 800 to one or more networks and/or to one or more network nodes through one or more network interface(s). In this manner, the computing device 800 can be a part of a network of computers (e.g., a local area network or a wide area network). Any or all components of the computing device 800 can be used in conjunction with the subject disclosure.
In one or more implementations, the system memory 804 may store one or more feature extraction models, one or more gesture prediction models, one or more gesture detectors, one or more (e.g., virtual) controllers (e.g., sets of gestures and corresponding actions to be performed by the device 800 or another electronic device when specific gestures are detected), voice assistant applications, and/or other information (e.g., locations, identifiers, location information, etc.) associated with one or more other devices, using data stored locally in system memory 804. Moreover, the input device interface 806 may include suitable logic, circuitry, and/or code for capturing input, such as audio input, remote control input, touchscreen input, keyboard input, etc. The output device interface 808 may include suitable logic, circuitry, and/or code for generating output, such as audio output, display output, light output, and/or haptic and/or other tactile output (e.g., vibrations, taps, etc.).
The sensors included in or connected to input device interface 806 may include one or more ultra-wide band (UWB) sensors, one or more inertial measurement unit (IMU) sensors (e.g., one or more accelerometers, one or more gyroscopes, one or more compasses and/or magnetometers, etc.), one or more image sensors (e.g., coupled with and/or including a computer-vision engine), one or more electromyography (EMG) sensors, optical sensors, light sensors, pressure sensors, strain gauges, lidar sensors, proximity sensors, ultrasound sensors, radio-frequency (RF) sensors, platinum optical intensity sensors, and/or other sensors for sensing aspects of the environment around and/or in contact with the device 800 (e.g., including objects, devices, and/or user movements and/or gestures in the environment). The sensors may also include motion sensors, such as inertial measurement unit (IMU) sensors (e.g., one or more accelerometers, one or more gyroscopes, and/or one or more magnetometers) that sense the motion of the device 800 itself.
In one or more implementations, system memory 804 may store a machine learning system that includes one or more machine learning models that may receive, as inputs, outputs from one or more sensors (e.g., sensors 210, 230, which may be connected to input device interface 806). The machine learning models may have been trained based on outputs from various sensors corresponding to the sensor(s), in order to detect and/or predict a user gesture. When the device 800 detects a user gesture using the sensor(s) and the machine learning models, the device 800 may perform a particular action (e.g., raising or lowering a volume of audio output being generated by the device 800, scrolling through video or audio content at the device 800, other actions at the device 800, and/or generating a control signal corresponding to a selected device and/or a selected gesture-control element for the selected device, and transmitting the control signal to the selected device). In one or more implementations, the machine learning models may be trained based on local sensor data from the sensor(s) at the device 800, and/or based on a general population of devices and/or users. In this manner, the machine learning models can be re-used across multiple different users even without a priori knowledge of any particular characteristics of the individual users in one or more implementations. In one or more implementations, a model trained on a general population of users can later be tuned or personalized for a specific user of a device such as the device 800.
Implementations within the scope of the present disclosure can be partially or entirely realized using a tangible computer-readable storage medium (or multiple tangible computer-readable storage media of one or more types) encoding one or more instructions. The tangible computer-readable storage medium also can be non-transitory in nature.
The computer-readable storage medium can be any storage medium that can be read, written, or otherwise accessed by a general purpose or special purpose computing device, including any processing electronics and/or processing circuitry capable of executing instructions. For example, without limitation, the computer-readable medium can include any volatile semiconductor memory, such as RAM, DRAM, SRAM, T-RAM, Z-RAM, and TTRAM. The computer-readable medium also can include any non-volatile semiconductor memory, such as ROM, PROM, EPROM, EEPROM, NVRAM, flash, nvSRAM, FeRAM, FeTRAM, MRAM, PRAM, CBRAM, SONOS, RRAM, NRAM, racetrack memory, FJG, and Millipede memory.
Further, the computer-readable storage medium can include any non-semiconductor memory, such as optical disk storage, magnetic disk storage, magnetic tape, other magnetic storage devices, or any other medium capable of storing one or more instructions. In one or more implementations, the tangible computer-readable storage medium can be directly coupled to a computing device, while in other implementations, the tangible computer-readable storage medium can be indirectly coupled to a computing device, e.g., via one or more wired connections, one or more wireless connections, or any combination thereof.
Instructions can be directly executable or can be used to develop executable instructions. For example, instructions can be realized as executable or non-executable machine code or as instructions in a high-level language that can be compiled to produce executable or non-executable machine code. Further, instructions also can be realized as or can include data. Computer-executable instructions also can be organized in any format, including routines, subroutines, programs, data structures, objects, modules, applications, applets, functions, etc. As recognized by those of skill in the art, details including, but not limited to, the number, structure, sequence, and organization of instructions can vary significantly without varying the underlying logic, function, processing, and output.
While the above discussion primarily refers to microprocessor or multi-core processors that execute software, one or more implementations are performed by one or more integrated circuits, such as ASICs or FPGAs. In one or more implementations, such integrated circuits execute instructions that are stored on the circuit itself.
Those of skill in the art would appreciate that the various illustrative blocks, modules, elements, components, methods, and algorithms described herein may be implemented as electronic hardware, computer software, or combinations of both. To illustrate this interchangeability of hardware and software, various illustrative blocks, modules, elements, components, methods, and algorithms have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application. Various components and blocks may be arranged differently (e.g., arranged in a different order, or partitioned in a different way) all without departing from the scope of the subject technology.
It is understood that any specific order or hierarchy of blocks in the processes disclosed is an illustration of example approaches. Based upon design preferences, it is understood that the specific order or hierarchy of blocks in the processes may be rearranged, or that all illustrated blocks be performed. Any of the blocks may be performed simultaneously. In one or more implementations, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the implementations described above should not be understood as requiring such separation in all implementations, and it should be understood that the described program components (e.g., computer program products) and systems can generally be integrated together in a single software product or packaged into multiple software products.
As used in this specification and any claims of this application, the terms “base station”, “receiver”, “computer”, “server”, “processor”, and “memory” all refer to electronic or other technological devices. These terms exclude people or groups of people. For the purposes of the specification, the terms “display” or “displaying” means displaying on an electronic device.
As used herein, the phrase “at least one of” preceding a series of items, with the term “and” or “or” to separate any of the items, modifies the list as a whole, rather than each member of the list (i.e., each item). The phrase “at least one of” does not require selection of at least one of each item listed; rather, the phrase allows a meaning that includes at least one of any one of the items, and/or at least one of any combination of the items, and/or at least one of each of the items. By way of example, the phrases “at least one of A, B, and C” or “at least one of A, B, or C” each refer to only A, only B, or only C; any combination of A, B, and C; and/or at least one of each of A, B, and C.
The predicate words “configured to,” “operable to,” and “programmed to” do not imply any particular tangible or intangible modification of a subject, but, rather, are intended to be used interchangeably. In one or more implementations, a processor configured to monitor and control an operation or a component may also mean the processor being programmed to monitor and control the operation or the processor being operable to monitor and control the operation. Likewise, a processor configured to execute code can be construed as a processor programmed to execute code or operable to execute code.
Phrases such as an aspect, the aspect, another aspect, some aspects, one or more aspects, an implementation, the implementation, another implementation, some implementations, one or more implementations, an embodiment, the embodiment, another embodiment, some embodiments, one or more embodiments, a configuration, the configuration, another configuration, some configurations, one or more configurations, the subject technology, the disclosure, the present disclosure, other variations thereof and alike are for convenience and do not imply that a disclosure relating to such phrase(s) is essential to the subject technology or that such disclosure applies to all configurations of the subject technology. A disclosure relating to such phrase(s) may apply to all configurations, or one or more configurations. A disclosure relating to such phrase(s) may provide one or more examples. A phrase such as an aspect or some aspects may refer to one or more aspects and vice versa, and this applies similarly to other foregoing phrases.
The word “exemplary” is used herein to mean “serving as an example, instance, or illustration.” Any embodiment described herein as “exemplary” or as an “example” is not necessarily to be construed as preferred or advantageous over other implementations. Furthermore, to the extent that the term “include,” “have,” or the like is used in the description or the claims, such term is intended to be inclusive in a manner similar to the term “comprise” as “comprise” is interpreted when employed as a transitional word in a claim.
All structural and functional equivalents to the elements of the various aspects described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the claims. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the claims. No claim element is to be construed under the provisions of 35 U.S.C. § 112(f) unless the element is expressly recited using the phrase “means for” or, in the case of a method claim, the element is recited using the phrase “step for.”
The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. Thus, the claims are not intended to be limited to the aspects shown herein, but are to be accorded the full scope consistent with the language claims, wherein reference to an element in the singular is not intended to mean “one and only one” unless specifically so stated, but rather “one or more”. Unless specifically stated otherwise, the term “some” refers to one or more. Pronouns in the masculine (e.g., his) include the feminine and neuter gender (e.g., her and its) and vice versa. Headings and subheadings, if any, are used for convenience only and do not limit the subject disclosure.
The present application claims the benefit of U.S. Provisional application, Ser. No. 63/408,467 filed Sep. 20, 2022, entitled “FORCE ESTIMATION FROM WRIST ELECTROMYOGRAPHY.” The aforementioned application is incorporated herein by reference in its entirety.