Wearable devices and methods for improved speech recognition

Information

  • Patent Grant
  • Patent Number
    11,036,302
  • Date Filed
    Monday, February 10, 2020
  • Date Issued
    Tuesday, June 15, 2021
Abstract
Systems and methods for using neuromuscular information to improve speech recognition. The system includes a plurality of neuromuscular sensors, arranged on one or more wearable devices, wherein the plurality of neuromuscular sensors is configured to continuously record a plurality of neuromuscular signals from a user, at least one storage device configured to store one or more trained statistical models, and at least one computer processor programmed to provide as an input to the one or more trained statistical models, the plurality of neuromuscular signals or signals derived from the plurality of neuromuscular signals, determine based, at least in part, on an output of the one or more trained statistical models, at least one instruction for modifying an operation of a speech recognizer, and provide the at least one instruction to the speech recognizer.
Description
BACKGROUND

Automated speech recognition systems transform recorded audio including speech into recognized text. The speech recognition systems convert the input audio into text using one or more acoustic or language models that represent the mapping from audio input to text output using language-based constructs such as phonemes, syllables, or words. The models used for speech recognition may be speaker independent or speaker dependent and may be trained or refined for use by a particular user as the user uses the system and feedback is provided to retrain the models. Increased usage of the system by the particular user typically results in improvements to the accuracy and/or speed by which the system is able to produce speech recognition results as the system learns the user's speech characteristics and style.


SUMMARY

Systems and methods are described herein for providing an improved speech recognition system in which speech data provided as input to the system is augmented with neuromuscular signals (e.g., recorded using electromyography (EMG)). The improved speech recognition system may exhibit better performance (e.g., accuracy, speed) compared to speech recognition systems that receive only speech data as input. For example, a musculo-skeletal representation (including, but not limited to, body position information and biophysical quantities such as motor unit and muscle activation levels and forces) determined based on the neuromuscular signals may encode contextual information represented in a user's movements or activation of their muscles that may be used to enhance speech recognition performance. In another example, the described systems and methods may interpret parts of speech from the user's movements or activations to enhance speech recognition performance. In some embodiments, the described systems and methods provide for modifying an operation of a speech recognition system (e.g., by enabling and disabling speech recognition with a wake word/phrase or gesture, applying formatting such as bold, italics, underline, indent, etc., entering punctuation, and other suitable modifications). In some embodiments, the described systems and methods provide for using recognized neuromuscular information, e.g., for one or more gestures, to change an interaction mode (e.g., dictation, spelling, editing, navigation, or another suitable mode) with the speech recognition system or speech recognizer. In some embodiments, the described systems and methods provide for using EMG-based approaches (e.g., EMG-based scrolling and clicking) to select text for editing, error correction, copying, pasting, or another suitable purpose. In some embodiments, the described systems and methods provide for selection of options from a list of choices, e.g., with audio feedback for “eyes-busy” situations like driving (“did you mean X or Y?”). In some embodiments, the described systems and methods provide for a hybrid neuromuscular/speech input that gracefully switches from one mode to the other, and uses both modes when available to increase accuracy and speed. In some embodiments, the described systems and methods provide for text input using a linguistic token, such as a phoneme, character, syllable, word, sentence, or another suitable linguistic token, as the basic unit of recognition.


Some embodiments are directed to a system for using neuromuscular information to improve speech recognition. The system includes a plurality of neuromuscular sensors arranged on one or more wearable devices. The plurality of neuromuscular sensors is configured to continuously record a plurality of neuromuscular signals from a user. The system further includes at least one storage device configured to store one or more trained statistical models and at least one computer processor. The computer processor is programmed to provide, as an input to the one or more trained statistical models, the plurality of neuromuscular signals or signals derived from the plurality of neuromuscular signals. The computer processor is further programmed to determine based, at least in part, on an output of the one or more trained statistical models, at least one instruction for modifying an operation of a speech recognizer and provide the at least one instruction to the speech recognizer. In some embodiments, the instruction for modifying the operation of the speech recognizer is determined directly from the plurality of neuromuscular signals. For example, the instruction may be output from a trained statistical model after applying the plurality of neuromuscular signals as inputs to the trained statistical model. In some embodiments, a musculo-skeletal representation of the user is determined based on the output of the one or more trained statistical models, and the instruction for modifying the operation of the speech recognizer is determined based on the musculo-skeletal representation.
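By way of illustration only, the following Python sketch shows one way the processing described in this embodiment could be organized: a window of recorded neuromuscular signals is passed to a trained model, and the model output is mapped to an instruction for a speech recognizer. The model interface, confidence threshold, gesture names, and instruction names are hypothetical placeholders and are not taken from this disclosure.

    from typing import Optional

    import numpy as np


    def infer_instruction(model, emg_window: np.ndarray) -> Optional[str]:
        """Map a window of EMG samples (channels x samples) to a recognizer instruction."""
        # The trained model is assumed to output a probability for each known gesture class.
        gesture_probs = model.predict(emg_window)          # e.g., {"fist": 0.9, "rest": 0.1}
        gesture = max(gesture_probs, key=gesture_probs.get)
        if gesture_probs[gesture] < 0.8:                   # ignore low-confidence output
            return None
        # Illustrative mapping from recognized gestures to recognizer instructions.
        return {
            "fist": "TOGGLE_DICTATION",
            "thumbs_up": "ENTER_EDIT_MODE",
            "pinch": "INSERT_PUNCTUATION",
        }.get(gesture)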


Some embodiments are directed to a system for using neuromuscular information to improve speech recognition. The system includes a plurality of neuromuscular sensors arranged on one or more wearable devices. The plurality of neuromuscular sensors is configured to continuously record a plurality of neuromuscular signals from a user. The system further includes at least one storage device configured to store one or more trained statistical models, at least one input interface configured to receive audio input, and at least one computer processor. The computer processor is programmed to obtain the audio input from the input interface and obtain the plurality of neuromuscular signals from the plurality of neuromuscular sensors. The computer processor is further programmed to provide, as input to the one or more trained statistical models, the audio input and/or the plurality of neuromuscular signals or signals derived from the plurality of neuromuscular signals. The computer processor is further programmed to determine text based, at least in part, on an output of the one or more trained statistical models.


Some embodiments are directed to a system for text input based on neuromuscular information. The system includes a plurality of neuromuscular sensors arranged on one or more wearable devices. The plurality of neuromuscular sensors is configured to continuously record a plurality of neuromuscular signals from a user. The system further includes at least one storage device configured to store one or more trained statistical models and at least one computer processor. The computer processor is programmed to obtain the plurality of neuromuscular signals from the plurality of neuromuscular sensors and provide the plurality of neuromuscular signals, or signals derived from the plurality of neuromuscular signals, as input to the one or more trained statistical models. The computer processor is further programmed to determine one or more linguistic tokens based, at least in part, on an output of the one or more trained statistical models.


It should be appreciated that all combinations of the foregoing concepts and additional concepts discussed in greater detail below (provided such concepts are not mutually inconsistent) are contemplated as being part of the inventive subject matter disclosed herein. In particular, all combinations of claimed subject matter appearing at the end of this disclosure are contemplated as being part of the inventive subject matter disclosed herein.





BRIEF DESCRIPTION OF DRAWINGS

Various non-limiting embodiments of the technology will be described with reference to the following figures. It should be appreciated that the figures are not necessarily drawn to scale.



FIG. 1 is a schematic diagram of a computer-based system for using neuromuscular information to improve speech recognition in accordance with some embodiments of the technology described herein;



FIG. 2 is a flowchart of an illustrative process for using neuromuscular information to improve speech recognition, in accordance with some embodiments of the technology described herein;



FIG. 3 is a flowchart of another illustrative process for using neuromuscular information to improve speech recognition, in accordance with some embodiments of the technology described herein;



FIG. 4 is a flowchart of yet another illustrative process for using neuromuscular information to improve speech recognition, in accordance with some embodiments of the technology described herein;



FIG. 5 is a flowchart of an illustrative process for using neuromuscular information to improve speech recognition in accordance with some embodiments of the technology described herein;



FIG. 6 illustrates a wristband having EMG sensors arranged circumferentially thereon, in accordance with some embodiments of the technology described herein; and



FIG. 7 illustrates a user wearing the wristband of FIG. 6 while typing on a keyboard, in accordance with some embodiments of the technology described herein.





DETAILED DESCRIPTION

Automated speech recognition (ASR) is a computer-implemented process for converting speech to text using mappings between acoustic features extracted from input speech and language-based representations such as phonemes. Some ASR systems take as input information other than speech to improve the performance of the ASR system. For example, an ASR system may take as input both visual information (e.g., images of a user's face) and audio information (e.g., speech) and may determine a speech recognition result based on one or both of the types of inputs.


The inventors have recognized and appreciated that existing techniques for performing speech recognition may be improved by using musculo-skeletal information about the position and/or movement of a user's body (including, but not limited to, the user's arm, wrist, hand, neck, throat, tongue, or face) derived from recorded neuromuscular signals to augment the analysis of received audio when performing speech recognition.


The human musculo-skeletal system can be modeled as a multi-segment articulated rigid body system, with joints forming the interfaces between the different segments and joint angles defining the spatial relationships between connected segments in the model. Constraints on the movement at the joints are governed by the type of joint connecting the segments and the biological structures (e.g., muscles, tendons, ligaments) that restrict the range of movement at the joint. For example, the shoulder joint connecting the upper arm to the torso and the hip joint connecting the upper leg to the torso are ball and socket joints that permit extension and flexion movements as well as rotational movements. By contrast, the elbow joint connecting the upper arm and the forearm and the knee joint connecting the upper leg and the lower leg allow for a more limited range of motion. As described herein, a multi-segment articulated rigid body system is used to model the human musculo-skeletal system. However, it should be appreciated that some segments of the human musculo-skeletal system (e.g., the forearm), though approximated as a rigid body in the articulated rigid body system, may include multiple rigid structures (e.g., the ulna and radius bones of the forearm) that provide for more complex movement within the segment that is not explicitly considered by the rigid body model. Accordingly, a model of an articulated rigid body system for use with some embodiments of the technology described herein may include segments that represent a combination of body parts that are not strictly rigid bodies.


In kinematics, rigid bodies are objects that exhibit various attributes of motion (e.g., position, orientation, angular velocity, acceleration). Knowing the motion attributes of one segment of the rigid body enables the motion attributes for other segments of the rigid body to be determined based on constraints in how the segments are connected. For example, the arm may be modeled as a two-segment articulated rigid body with an upper portion corresponding to the upper arm connected at a shoulder joint to the torso of the body and a lower portion corresponding to the forearm, wherein the two segments are connected at the elbow joint. As another example, the hand may be modeled as a multi-segment articulated body with the joints in the wrist and each finger forming the interfaces between the multiple segments in the model. In some embodiments, movements of the segments in the rigid body model can be simulated as an articulated rigid body system in which orientation and position information of a segment relative to other segments in the model are predicted using a trained statistical model, as described in more detail below.
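As a concrete numerical illustration of the two-segment arm model described above (a sketch only; the segment lengths and angle conventions are assumptions, not values from this disclosure), the planar forward kinematics below compute the wrist position from shoulder and elbow joint angles, showing how joint angles and segment constraints determine the position of connected segments.

    import numpy as np


    def wrist_position(shoulder_angle: float, elbow_angle: float,
                       upper_arm_len: float = 0.30, forearm_len: float = 0.25) -> np.ndarray:
        """Planar two-segment arm: angles in radians, lengths in meters, origin at the shoulder."""
        elbow = upper_arm_len * np.array([np.cos(shoulder_angle), np.sin(shoulder_angle)])
        total_angle = shoulder_angle + elbow_angle      # elbow angle is relative to the upper arm
        return elbow + forearm_len * np.array([np.cos(total_angle), np.sin(total_angle)])


    # Arm raised vertically with the forearm bent forward by 90 degrees.
    print(wrist_position(np.pi / 2, -np.pi / 2))        # -> [0.25, 0.30]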



FIG. 1 illustrates a system 100 in accordance with some embodiments. The system includes a plurality of autonomous sensors 110 configured to record signals resulting from the movement of portions of a human body (including, but not limited to, the user's arm, wrist, hand, neck, throat, tongue, or face). As used herein, the term “autonomous sensors” refers to sensors configured to measure the movement of body segments without requiring the use of external sensors, examples of which include, but are not limited to, cameras or global positioning systems. Autonomous sensors 110 may include one or more Inertial Measurement Units (IMUs), which measure a combination of physical aspects of motion, using, for example, an accelerometer and a gyroscope. In some embodiments, IMUs may be used to sense information about the movement of the part of the body on which the IMU is attached and information derived from the sensed data (e.g., position and/or orientation information) may be tracked as the user moves over time. For example, one or more IMUs may be used to track movements of portions of a user's body proximal to the user's torso (e.g., arms, legs) as the user moves over time.


Autonomous sensors 110 may also include a plurality of neuromuscular sensors configured to record signals arising from neuromuscular activity in skeletal muscle of a human body. The term “neuromuscular activity” as used herein refers to neural activation of spinal motor neurons that innervate a muscle, muscle activation, muscle contraction, or any combination of the neural activation, muscle activation, and muscle contraction. Neuromuscular sensors may include one or more electromyography (EMG) sensors, one or more mechanomyography (MMG) sensors, one or more sonomyography (SMG) sensors, and/or one or more sensors of any suitable type that are configured to detect neuromuscular signals. In some embodiments, the plurality of neuromuscular sensors may be used to sense muscular activity related to a movement of the part of the body controlled by muscles from which the neuromuscular sensors are arranged to sense the muscle activity. Spatial information (e.g., position and/or orientation information) describing the movement (e.g., for portions of the user's body distal to the user's torso, such as hands and feet) may be predicted based on the sensed neuromuscular signals as the user moves over time.


In embodiments that include at least one IMU and a plurality of neuromuscular sensors, the IMU(s) and neuromuscular sensors may be arranged to detect movement or activation of different parts of the human body (including, but not limited to, the user's arm, wrist, hand, neck, throat, tongue, or face). For example, the IMU(s) may be arranged to detect movements of one or more body segments proximal to the torso, whereas the neuromuscular sensors may be arranged to detect movements of one or more body segments distal to the torso. It should be appreciated, however, that autonomous sensors 110 may be arranged in any suitable way, and embodiments of the technology described herein are not limited based on the particular sensor arrangement. For example, in some embodiments, at least one IMU and a plurality of neuromuscular sensors may be co-located on a body segment to track movements of the body segment using different types of measurements. In one implementation, an IMU sensor and a plurality of EMG sensors are arranged on a wearable device configured to be worn around the user's neck and/or proximate to the user's face. In one implementation described in more detail below, an IMU sensor and a plurality of EMG sensors are arranged on a wearable device configured to be worn around the lower arm or wrist of a user. In such an arrangement, the IMU sensor may be configured to track movement or activation information (e.g., positioning and/or orientation over time) associated with one or more arm segments, to determine, for example, whether the user has raised or lowered their arm, whereas the EMG sensors may be configured to determine movement or activation information associated with wrist or hand segments to determine, for example, whether the user has an open or closed hand configuration.


Each of autonomous sensors 110 includes one or more sensing components configured to sense movement information or activation information from the user. The movement or activation sensed by the autonomous sensors 110 may correspond to muscle activation at a fixed point in time (e.g., the user making a thumbs up gesture or tensing arm muscles) or may correspond to the user performing a movement over a period of time (e.g., the user moving their arm in an arc). The autonomous sensors 110 may sense movement information when the user performs a movement, such as a gesture, a movement of a portion of the user's body (including, but not limited to, the user's arm, wrist, hand, neck, throat, tongue, or face), or another suitable movement. The autonomous sensors 110 may sense activation information when the user performs an activation, such as forces applied to external objects without movement, balanced forces (co-contraction), activation of individual muscle fibers (e.g., muscle fibers too weak to cause noticeable movement), or another suitable activation. In the case of IMUs, the sensing components may include one or more accelerometers, gyroscopes, magnetometers, or any combination thereof to measure characteristics of body motion, examples of which include, but are not limited to, acceleration, angular velocity, and sensed magnetic field around the body. In the case of neuromuscular sensors, the sensing components may include, but are not limited to, electrodes configured to detect electric potentials on the surface of the body (e.g., for EMG sensors), vibration sensors configured to measure skin surface vibrations (e.g., for MMG sensors), and acoustic sensing components configured to measure ultrasound signals (e.g., for SMG sensors) arising from muscle activity.


In some embodiments, the output of one or more of the sensing components may be processed using hardware signal processing circuitry (e.g., to perform amplification, filtering, and/or rectification). In other embodiments, at least some signal processing of the output of the sensing components may be performed in software. Thus, signal processing of autonomous signals recorded by autonomous sensors 110 may be performed in hardware, software, or by any suitable combination of hardware and software, as aspects of the technology described herein are not limited in this respect.
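The brief sketch below illustrates the kind of software signal processing mentioned above for neuromuscular signals: band-pass filtering followed by full-wave rectification. The band edges and sampling rate are typical surface-EMG assumptions chosen for illustration, not parameters specified by this disclosure.

    import numpy as np
    from scipy.signal import butter, filtfilt


    def preprocess_emg(raw: np.ndarray, fs: float = 1000.0) -> np.ndarray:
        """raw: (channels, samples) array of EMG voltages sampled at fs Hz."""
        b, a = butter(4, [20.0, 450.0], btype="bandpass", fs=fs)  # keep the typical EMG band
        filtered = filtfilt(b, a, raw, axis=-1)                   # zero-phase temporal filtering
        return np.abs(filtered)                                   # full-wave rectification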


In some embodiments, the recorded sensor data may be processed to compute additional derived measurements that are then provided as input to a statistical model, as described in more detail below. For example, recorded signals from an IMU sensor may be processed to derive an orientation signal that specifies the orientation of a rigid body segment over time. Autonomous sensors 110 may implement signal processing using components integrated with the sensing components, or at least a portion of the signal processing may be performed by one or more components in communication with, but not directly integrated with the sensing components of the autonomous sensors.


In some embodiments, at least some of the plurality of autonomous sensors 110 are arranged as a portion of a wearable device configured to be worn on or around part of a user's body. For example, in one non-limiting example, an IMU sensor and a plurality of neuromuscular sensors are arranged circumferentially around an adjustable and/or elastic band such as a wristband or armband configured to be worn around a user's wrist or arm. Alternatively or additionally, at least some of the autonomous sensors may be arranged on a wearable patch configured to be affixed to a portion of the user's body.


In one implementation, 16 EMG sensors are arranged circumferentially around an elastic band configured to be worn around a user's lower arm. For example, FIG. 6 shows EMG sensors 504 arranged circumferentially around elastic band 502. It should be appreciated that any suitable number of neuromuscular sensors may be used and the number and arrangement of neuromuscular sensors used may depend on the particular application for which the wearable device is used. For example, a wearable armband or wristband may be used to predict musculo-skeletal position information for hand-based motor tasks, whereas a wearable leg or ankle band may be used to predict musculo-skeletal position information for foot-based motor tasks. For example, as shown in FIG. 7, a user 506 may be wearing elastic band 502 on hand 508. In this way, EMG sensors 504 may be configured to record EMG signals as a user controls keyboard 510 using fingers 512. In some embodiments, elastic band 502 may also include one or more IMUs (not shown), configured to record movement or activation information, as discussed above.


In some embodiments, multiple wearable devices, each having one or more IMUs and/or neuromuscular sensors included thereon may be used to predict musculo-skeletal position information for movements that involve multiple parts of the body.


System 100 also includes voice interface 120 configured to receive audio input. For example, voice interface 120 may include a microphone that, when activated, receives speech data, and processor(s) 112 may perform automatic speech recognition (ASR) based on the speech data. Audio input including speech data may be processed by an ASR system, which converts audio input to recognized text. The received speech data may be stored in a datastore (e.g., local or remote storage) associated with system 100 to facilitate the ASR processing. In some embodiments, ASR processing may be performed in whole or in part by one or more computers (e.g., a server) remotely located from voice interface 120. For example, in some embodiments, speech recognition may be performed locally using an embedded ASR engine associated with voice interface 120, a remote ASR engine in network communication with voice interface 120 via one or more networks, or speech recognition may be performed using a distributed ASR system including both embedded and remote components. Additionally, it should be appreciated that computing resources used in accordance with the ASR engine may also be located remotely from voice interface 120 to facilitate the ASR processing described herein, as aspects of the invention related to ASR processing are not limited in any way based on the particular implementation or arrangement of these components within system 100.


System 100 also includes one or more computer processor(s) 112 programmed to communicate with autonomous sensors 110 and/or voice interface 120. For example, signals recorded by one or more of the autonomous sensors 110 may be provided to processor(s) 112, which may be programmed to perform signal processing, non-limiting examples of which are described above. In another example, speech data recorded by voice interface 120 may be provided to processor(s) 112, which may be programmed to perform automatic speech recognition, non-limiting examples of which are described above. Processor(s) 112 may be implemented in hardware, firmware, software, or any combination thereof. Additionally, processor(s) 112 may be co-located on a same wearable device as one or more of the autonomous sensors or the voice interface or may be at least partially located remotely (e.g., processing may occur on one or more network-connected processors).


System 100 also includes datastore 114 in communication with processor(s) 112. Datastore 114 may include one or more storage devices configured to store information describing a statistical model used for predicting musculo-skeletal position information based on signals recorded by autonomous sensors 110 in accordance with some embodiments. Processor(s) 112 may be configured to execute one or more machine learning algorithms that process signals output by the autonomous sensors 110 to train a statistical model stored in datastore 114, and the trained (or retrained) statistical model may be stored in datastore 114 for later use in generating a musculo-skeletal representation. Non-limiting examples of statistical models that may be used in accordance with some embodiments to predict musculo-skeletal position information based on recorded signals from autonomous sensors are discussed in more detail below.


In some embodiments, a set of training data, including sensor data from the autonomous sensors 110 and/or speech data from the voice interface 120, is obtained for training the statistical model. This training data may also be referred to as ground truth data. The training data may be obtained by prompting the user at certain times to perform a movement or activation and capturing the corresponding sensor data and/or speech data. Alternatively or additionally, the training data may be captured when the user is using a device, such as a keyboard. For example, the captured training data may include the user's EMG signal data and the user's corresponding key presses from a key logger. Alternatively or additionally, the training data may include ground truth joint angles corresponding to the user's movement or activation. The ground truth joint angles may be captured using, e.g., a camera device, while the user performs the movement or activation. Alternatively or additionally, the training data may include sensor data corresponding to a movement or activation performed by the user and annotated with speech data corresponding to the user speaking at the same time as performing the movement or activation. For example, the user may perform a gesture, such as a thumbs up gesture, and speak a word, such as “edit,” to indicate that the gesture relates to an edit function. Alternatively or additionally, the training data may be captured when the user is using a writing implement or instrument, such as a pen, a pencil, a stylus, or another suitable writing implement or instrument. For example, the captured training data may include EMG signal data recorded when the user is prompted to write one or more characters, words, shorthand symbols, and/or another suitable written input using a pen. Optionally, the motion of the writing implement or instrument may be recorded as the user writes. For example, an electronic stylus (or another device configured to record motion) may record motion of the electronic stylus as the user writes a prompted word using the electronic stylus. Accordingly, the captured training data may include recorded EMG signal data and the corresponding recorded motion of the writing implement or instrument as the user writes one or more letters, words, shorthand symbols, and/or another suitable written input using the writing implement or instrument.
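As one illustration of the keyboard-based training data collection described above, the sketch below pairs each key press reported by a key logger with the window of EMG samples recorded just before it. The window length, timestamp format, and data layout are assumptions made only for this example.

    import numpy as np


    def build_training_pairs(emg, emg_timestamps, key_events, window_s=0.25):
        """emg: (channels, samples); emg_timestamps: (samples,); key_events: list of (timestamp, key)."""
        pairs = []
        for t_key, key in key_events:
            mask = (emg_timestamps >= t_key - window_s) & (emg_timestamps < t_key)
            window = emg[:, mask]
            if window.size:                   # skip presses with no EMG recorded in the window
                pairs.append((window, key))   # (input window, ground-truth label)
        return pairs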


In some embodiments, processor(s) 112 may be configured to communicate with one or more of autonomous sensors 110, for example, to calibrate the sensors prior to measurement of movement or activation information. For example, a wearable device may be positioned in different orientations on or around a part of a user's body and calibration may be performed to determine the orientation of the wearable device and/or to perform any other suitable calibration tasks. Calibration of autonomous sensors 110 may be performed in any suitable way, and embodiments are not limited in this respect. For example, in some embodiments, a user may be instructed to perform a particular sequence of movements or activations and the recorded movement or activation information may be matched to a template by virtually rotating and/or scaling the signals detected by the sensors (e.g., by the electrodes on EMG sensors). In some embodiments, calibration may involve changing the gain(s) of one or more analog to digital converters (ADCs), for example, in the case that the signals detected by the sensors result in saturation of the ADCs.
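The sketch below gives one simple way the template-matching calibration described above could be realized for a circumferential electrode band: the per-channel activity recorded during a prompted calibration gesture is compared against a stored template at every circular rotation of the channels, and the best-matching rotation is used to re-index subsequent recordings. The scoring rule and data layout are illustrative assumptions.

    import numpy as np


    def estimate_channel_offset(activation: np.ndarray, template: np.ndarray) -> int:
        """activation, template: per-channel RMS activity for the same prompted gesture."""
        n = len(template)
        scores = [np.dot(np.roll(activation, -shift), template) for shift in range(n)]
        return int(np.argmax(scores))          # rotate channels by this offset to match the template


    def apply_offset(emg: np.ndarray, offset: int) -> np.ndarray:
        return np.roll(emg, -offset, axis=0)   # emg: (channels, samples)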


System 100 optionally includes one or more controllers 116 configured to receive a control signal based, at least in part, on processing by processor(s) 112. As discussed in more detail below, processor(s) 112 may implement one or more trained statistical models 114 configured to predict musculo-skeletal position information based, at least in part, on signals recorded by autonomous sensors 110 worn by a user. One or more control signals determined based on the output of the trained statistical model(s) may be sent to controller 116 to control one or more operations of a device associated with the controller. In some embodiments, system 100 does not include one or more controllers configured to control a device. In such embodiments, data output as a result of processing by processor(s) 112 (e.g., using trained statistical model(s) 114) may be stored for future use or transmitted to another application or user.


In some embodiments, during real-time tracking, information sensed from a single armband/wristband wearable device that includes at least one IMU and a plurality of neuromuscular sensors is used to reconstruct body movements, such as reconstructing the position and orientation of the forearm, upper arm, wrist, and hand relative to a torso reference frame using a single arm/wrist-worn device, and without the use of external devices or position determining systems. For brevity, determining both position and orientation may also be referred to herein generally as determining movement.


As discussed above, some embodiments are directed to using a statistical model for predicting musculo-skeletal position information based on signals recorded from wearable autonomous sensors. The statistical model may be used to predict the musculo-skeletal position information without having to place sensors on each segment of the rigid body that is to be represented in a computer-generated musculo-skeletal representation of the user's body. As discussed briefly above, the types of joints between segments in a multi-segment articulated rigid body model constrain movement of the rigid body. Additionally, different individuals tend to move in characteristic ways when performing a task that can be captured in statistical patterns of individual user behavior. At least some of these constraints on human body movement may be explicitly incorporated into statistical models used for prediction in accordance with some embodiments. Additionally or alternatively, the constraints may be learned by the statistical model through training based on recorded sensor data. Constraints imposed in the construction of the statistical model are those set by anatomy and the physics of a user's body, while constraints derived from statistical patterns are those set by human behavior for one or more users from whom sensor measurements are obtained. As described in more detail below, the constraints may comprise part of the statistical model itself, being represented by information (e.g., connection weights between nodes) in the model.


In some embodiments, system 100 may be trained to predict musculo-skeletal position information as a user moves or activates muscle fibers. In some embodiments, the system 100 may be trained by recording signals from autonomous sensors 110 (e.g., IMU sensors, EMG sensors) and position information recorded from position sensors worn by one or more users as the user(s) perform one or more movements. The position sensors, described in more detail below, may measure the position of each of a plurality of spatial locations on the user's body as the one or more movements are performed during training to determine the actual position of the body segments. After such training, the system 100 may be configured to predict, based on a particular user's autonomous sensor signals, musculo-skeletal position information (e.g., a set of joint angles) that enable the generation of a musculo-skeletal representation without the use of the position sensors.


As discussed above, some embodiments are directed to using a statistical model for predicting musculo-skeletal position information to enable the generation of a computer-based musculo-skeletal representation. The statistical model may be used to predict the musculo-skeletal position information based on IMU signals, neuromuscular signals (e.g., EMG, MMG, and SMG signals), or a combination of IMU signals and neuromuscular signals detected as a user performs one or more movements.



FIG. 2 describes a process 200 for using neuromuscular information to improve speech recognition. Process 200 may be executed by any suitable computing device(s), as aspects of the technology described herein are not limited in this respect. For example, process 200 may be executed by processor(s) 112 described with reference to FIG. 1. As another example, one or more acts of process 200 may be executed using one or more servers (e.g., servers included as a part of a cloud computing environment). For example, at least a portion of act 204 relating to determining a musculo-skeletal representation of the user may be performed using a cloud computing environment. Although process 200 is described herein with respect to processing IMU and EMG signals, it should be appreciated that process 200 may be used to predict neuromuscular information based on any recorded autonomous signals including, but not limited to, IMU signals, EMG signals, MMG signals, SMG signals, or any suitable combination thereof and a trained statistical model trained on such autonomous signals.


Process 200 begins at act 202, where speech data is obtained for one or multiple users from voice interface 120. For example, voice interface 120 may include a microphone that samples audio input at a particular sampling rate (e.g., 16 kHz), and recording speech data in act 202 may include sampling audio input received by the microphone. Sensor data for a plurality of neuromuscular signals may be obtained from sensors 110 in parallel, prior to, or subsequent to obtaining the speech data from voice interface 120. For example, speech data corresponding to a word from the user may be obtained at the same time as sensor data corresponding to a gesture from the user to change the formatting of the word. In another example, speech data corresponding to a word from the user may be obtained, and at a later time, sensor data may be obtained corresponding to a gesture from the user to delete the word. In yet another example, sensor data may be obtained corresponding to a gesture from the user to change the formatting for text output in the future, and at a later time, speech data corresponding to a word from the user may be obtained and formatted accordingly. Optionally, process 200 proceeds to act 204, where the plurality of neuromuscular signals from sensors 110, or signals derived from the plurality of neuromuscular signals, are provided as input to one or more trained statistical models and a musculo-skeletal representation of the user is determined based, at least in part, on an output of the one or more trained statistical models.


In some embodiments, signals are recorded from a plurality of autonomous sensors arranged on or near the surface of a user's body to record activity associated with movements or activations of the body during performance of a task. In one example, the autonomous sensors comprise an IMU sensor and a plurality of EMG sensors arranged circumferentially (or otherwise oriented) on a wearable device configured to be worn on or around a part of the user's body, such as the user's arm. In some embodiments, the plurality of EMG signals are recorded continuously as a user wears the wearable device including the plurality of autonomous sensors.


In some embodiments, the signals recorded by the autonomous sensors are optionally processed. For example, the signals may be processed using amplification, filtering, rectification, or other types of signal processing. In some embodiments, filtering includes temporal filtering implemented using convolution operations and/or equivalent operations in the frequency domain (e.g., after the application of a discrete Fourier transform). In some embodiments, the signals are processed and used as training data to train the statistical model.


In some embodiments, the autonomous sensor signals are provided as input to a statistical model (e.g., a neural network) trained using any suitable number of layers and any suitable number of nodes in each layer. In some embodiments that continuously record autonomous signals, the continuously recorded autonomous signals (raw or processed) may be continuously or periodically provided as input to the trained statistical model for prediction of a musculo-skeletal representation for the given set of input sensor data. In some embodiments, the trained statistical model is a user-independent model trained based on autonomous sensor and position information measurements from a plurality of users. In other embodiments, the trained model is a user-dependent model trained on data recorded from the individual user from which the data recorded in act 204 is also acquired.


In some embodiments, after the trained statistical model receives the sensor data as a set of input parameters, a predicted musculo-skeletal representation is output from the trained statistical model. In some embodiments, the predicted musculo-skeletal representation may comprise a set of body position information values (e.g., a set of joint angles) for a multi-segment articulated rigid body model representing at least a portion of the user's body. In other embodiments, the musculo-skeletal representation may comprise a set of probabilities that the user is performing one or more movements or activations from a set of possible movements or activations.


Next, process 200 proceeds to act 206, where an instruction for modifying an operation of a speech recognizer is determined, and the instruction is provided to the speech recognizer. In embodiments where process 200 does not include act 204, the instruction for modifying the operation of the speech recognizer is determined based, at least in part, on an output of the one or more trained statistical models. For example, the one or more trained statistical models may directly map sensor data, e.g., EMG signal data, to the instruction for modifying the operation of the speech recognizer. In embodiments where process 200 includes act 204, the instruction for modifying the operation of the speech recognizer is determined based on the musculo-skeletal representation determined in act 204. In some embodiments, process 200 modifies the speech recognition process. For example, process 200 may modify at least a portion of text output from the speech recognizer, where the modification may relate to punctuation, spelling, formatting, or another suitable modification of the text. In another example, process 200 may change a caps lock mode of the speech recognizer. In yet another example, process 200 may change a language mode of the speech recognizer. For example, the speech recognizer may be instructed to change from recognizing English to recognizing French. Some embodiments include a communications interface configured to provide the instruction from a processor, e.g., processor(s) 112, to the speech recognizer. In some embodiments, a processor, e.g., processor(s) 112, is programmed to execute the speech recognizer. Process 200 proceeds to act 208, where speech recognition is resumed, e.g., for speech data recorded at act 202 or other suitable audio input.
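The following sketch illustrates act 206 in code: an instruction derived from the model output is delivered to the speech recognizer over a communications interface. The recognizer commands shown (set_caps_lock, set_language, insert_text) are hypothetical and stand in for whatever interface a particular speech recognizer exposes.

    class SpeechRecognizerClient:
        """Thin client that forwards instructions to a speech recognizer over some transport."""

        def __init__(self, transport):
            self.transport = transport        # e.g., a socket, pipe, or in-process queue

        def send(self, command: str, **params):
            self.transport.write({"command": command, **params})


    def apply_instruction(recognizer: SpeechRecognizerClient, instruction: str):
        if instruction == "TOGGLE_CAPS_LOCK":
            recognizer.send("set_caps_lock", enabled=True)
        elif instruction == "SWITCH_LANGUAGE_FR":
            recognizer.send("set_language", language="fr-FR")  # e.g., change English to French
        elif instruction == "INSERT_PUNCTUATION":
            recognizer.send("insert_text", text=".")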



FIG. 3 describes a process 300 for using neuromuscular information to improve speech recognition. Process 300 may be executed by any suitable computing device(s), as aspects of the technology described herein are not limited in this respect. For example, process 300 may be executed by processor(s) 112 described with reference to FIG. 1. As another example, one or more acts of process 300 may be executed using one or more servers (e.g., servers included as a part of a cloud computing environment). For example, at least a portion of act 314 relating to determining an edit and/or correct operation based on sensor data may be performed using a cloud computing environment. Although process 300 is described herein with respect to IMU and EMG signals, it should be appreciated that process 300 may be used to predict neuromuscular information based on any recorded autonomous signals including, but not limited to, IMU signals, EMG signals, MMG signals, SMG signals, or any suitable combination thereof and a trained statistical model trained on such autonomous signals.


Process 300 begins at act 310, where speech recognition results are obtained, e.g., from speech data received from voice interface 120. In some embodiments, processor(s) 112 may perform ASR based on the speech data to generate the speech recognition results. In some embodiments, audio input including speech data may be processed by an ASR system, which produces speech recognition results by converting audio input to recognized text. The received speech data may be stored in a datastore (e.g., local or remote storage) associated with system 100 to facilitate the ASR processing.


Next, at act 312, sensor data is received, for example, from sensors 110. The sensor data may be recorded and processed as described with respect to the system of FIG. 1. The sensor data may include a plurality of neuromuscular signals and/or signals derived from the plurality of neuromuscular signals. The sensor data may be provided as input to one or more trained statistical models and a musculo-skeletal representation of the user may be determined based, at least in part, on an output of the one or more trained statistical models. Process 300 then proceeds to act 314, where an edit and/or correct operation is determined based on the sensor data. An instruction relating to the edit and/or correct operation of the speech recognizer is determined based on the determined musculo-skeletal representation, and the instruction is provided to the speech recognizer.


Next, process 300 proceeds to act 316 where the edit and/or correct operation is performed on the speech recognition results. For example, the edit and/or correct operation may be performed on the speech recognition results by allowing a user to edit and correct speech recognition results by selecting possibilities from a list. In another example, the edit and/or correct operation may be performed on the speech recognition results by allowing the user to initiate a spelling mode and correct spellings for one or more words in the speech recognition results. In yet another example, the edit and/or correct operation may be performed on the speech recognition results by allowing the user to delete one or more words in the speech recognition results. In another example, the edit and/or correct operation on the speech recognition results may be performed by allowing the user to scroll through the speech recognition results and insert one or more words at a desired insertion point in the speech recognition results. In another example, the edit and/or correct operation may be performed on the speech recognition results by allowing the user to select and replace one or more words in the speech recognition results. In another example, the edit and/or correct operation may be performed on the speech recognition results by auto-completing a frequently used phrase in the speech recognition results or allowing the user to select from a list of suggested completions for a phrase in the speech recognition results.
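As a concrete example of act 316 (a sketch only; the operation encoding and the word-list representation of the recognition result are assumptions), the function below applies a decoded edit/correct operation to the current speech recognition result.

    def apply_edit(words, operation):
        """words: recognition result as a list of words; operation: decoded edit/correct operation."""
        op = operation["type"]
        if op == "delete_word":
            i = operation["index"]
            return words[:i] + words[i + 1:]
        if op == "replace_word":
            edited = list(words)
            edited[operation["index"]] = operation["replacement"]
            return edited
        if op == "insert_words":
            i = operation["index"]
            return words[:i] + operation["words"] + words[i:]
        return words


    # Delete the duplicated word in "this is is a test".
    print(apply_edit(["this", "is", "is", "a", "test"], {"type": "delete_word", "index": 2}))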



FIG. 4 describes a process 400 for using neuromuscular information to improve speech recognition. Process 400 may be executed by any suitable computing device(s), as aspects of the technology described herein are not limited in this respect. For example, process 400 may be executed by processor(s) 112 described with reference to FIG. 1. As another example, one or more acts of process 400 may be executed using one or more servers (e.g., servers included as a part of a cloud computing environment). For example, at least a portion of act 412 relating to detecting EMG-based control information may be performed using a cloud computing environment. Although process 400 is described herein with respect to IMU and EMG signals, it should be appreciated that process 400 may determine neuromuscular information based on any recorded autonomous signals including, but not limited to, IMU signals, EMG signals, MMG signals, SMG signals, or any suitable combination thereof and a trained statistical model trained on such autonomous signals.


Process 400 begins at act 410, where control information is monitored, e.g., for one or more movements or activations performed by the user. For example, process 400 may monitor one or more EMG signals relating to neuromuscular information while speech data is obtained for one or multiple users from voice interface 120. Voice interface 120 may include a microphone that samples audio input at a particular sampling rate (e.g., 16 kHz). Sensor data relating to the control information may be received from sensors 110. The sensor data may include a plurality of neuromuscular signals and/or signals derived from the plurality of neuromuscular signals.


Next, process 400 proceeds to act 412, where it is determined whether control information relating to a particular movement or activation is detected. The sensor data may be provided as input to one or more trained statistical models and control information of the user may be determined based, at least in part, on an output of the one or more trained statistical models. The sensor data may be provided as input to a trained statistical model to determine control information as described with respect to FIG. 2.


If it is determined that control information for a particular movement or activation is detected, process 400 proceeds to act 414, where an action associated with speech recognition, and determined based on the detected control information, is performed. Otherwise, process 400 returns to act 410 to continue monitoring for control information. Performing an action with speech recognition may include, but is not limited to, altering a mode of the speech recognizer, starting or stopping the speech recognizer, or another suitable action associated with the speech recognizer. In another example, the user may perform a specific gesture to toggle the speech recognizer on and off, hold the gesture to keep the speech recognizer on, or hold a mute gesture to mute the speech recognizer. Determining an instruction for performing an action for the speech recognizer may be based on the determined control information, and the instruction may be provided to the speech recognizer. For example, the action associated with speech recognition may be performed by allowing a user to start or stop speech recognition, e.g., by making a gesture imitating a press of a button on a tape recorder. In another example, the action associated with speech recognition may be performed by allowing a user to initiate a spell check mode. In yet another example, the action associated with speech recognition may be performed by allowing a user to change the language of input by making a related gesture.
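One way the monitoring loop of acts 410-414 might look in code is sketched below: EMG-derived control information is polled continuously, and a detected gesture toggles or mutes the speech recognizer. The gesture detector, recognizer interface, gesture names, and polling interval are illustrative assumptions.

    import time


    def monitor_control_information(detector, recognizer, poll_s: float = 0.05):
        """Act 410: monitor control information; act 414: perform the associated action."""
        dictating = False
        while True:
            gesture = detector.latest_gesture()   # returns e.g. "toggle", "mute", or None
            if gesture == "toggle":
                dictating = not dictating
                if dictating:
                    recognizer.start()
                else:
                    recognizer.stop()
            elif gesture == "mute":
                recognizer.mute()
            time.sleep(poll_s)                    # act 410: keep monitoring for control information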



FIG. 5 describes a process 500 for using neuromuscular information to improve speech recognition. Process 500 may be executed by any suitable computing device(s), as aspects of the technology described herein are not limited in this respect. For example, process 500 may be executed by processor(s) 112 described with reference to FIG. 1. As another example, one or more acts of process 500 may be executed using one or more servers (e.g., servers included as a part of a cloud computing environment). For example, at least a portion of act 580 relating to determining model estimates may be performed using a cloud computing environment. Although process 500 is described herein with respect to IMU and EMG signals, it should be appreciated that process 500 may determine neuromuscular information based on any recorded autonomous signals including, but not limited to, IMU signals, EMG signals, MMG signals, SMG signals, or any suitable combination thereof and a trained statistical model trained on such autonomous signals.


In some embodiments, process 500 provides for a hybrid neuromuscular and speech input interface where a user may fluidly transition between using speech input, using neuromuscular input or using both speech input and neuromuscular input to perform speech recognition. The neuromuscular input may track body position information, movement, hand state, gestures, activations (e.g., from muscle fibers too weak to cause noticeable movement) or other suitable information relating to the plurality of recorded neuromuscular signals. In some embodiments, the speech input and neuromuscular input are used to provide for lower error rates in speech recognition. In other embodiments, the speech input and the neuromuscular input may be used selectively where one mode of input is preferable over the other. For example, in situations where it is not possible to speak aloud, only the neuromuscular input may be used to perform recognition.


At act 552 of process 500, sensor data is recorded, e.g., from sensors 110, and at act 554, the recorded sensor data is optionally processed. The sensor data may include a plurality of neuromuscular signals and/or signals derived from the plurality of neuromuscular signals. At act 562 of process 500, speech data is recorded, e.g., from one or multiple users from voice interface 120, and at act 564, the recorded speech data is optionally processed. Voice interface 120 may include a microphone that samples audio input at a particular sampling rate (e.g., 16 kHz), and the speech data may be recorded by sampling audio input received by the microphone.


At act 570 of process 500, one or both of the processed or unprocessed sensor data and speech data is provided as input to one or more trained statistical models. In some embodiments, both sensor data and speech data are input to the trained statistical model(s) to provide for lower speech recognition error rates. The statistical model(s) may be trained on both inputs used in parallel. In some embodiments, only one of the sensor data or the speech data may be provided as input to the trained statistical models. The statistical models trained on both inputs may be configured to gracefully transition between speech-only mode, sensor-only mode, and combined speech+sensor data mode based on particular conditions of the system use, for example, when only one input is available. In some embodiments, both the speech data, e.g., audio input, and the sensor data, e.g., a plurality of neuromuscular signals, are provided as input to the one or more trained statistical models. The audio input may be provided as input to the one or more trained statistical models at a first time and the plurality of neuromuscular signals may be provided as input to the one or more trained statistical models at a second time different from the first time. Alternatively, the speech data and the sensor data may be provided as input to the one or more trained statistical models simultaneously.
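The sketch below shows one way the graceful transition described above could be arranged: whichever inputs are currently available are passed to the corresponding trained model(s), and when both are available their per-text probabilities are fused. The model interfaces, and the simple multiplicative fusion used here in place of the full probability combination derived later in this description, are assumptions for illustration.

    def hybrid_recognize(speech_model, emg_model, audio=None, emg=None):
        """Each model.predict(...) is assumed to return a dict mapping candidate text to probability."""
        if audio is None and emg is None:
            return None
        if audio is None:
            scores = emg_model.predict(emg)               # neuromuscular-only mode
        elif emg is None:
            scores = speech_model.predict(audio)          # speech-only mode
        else:
            # Combined mode: multiply per-text probabilities from both models (a simplified
            # stand-in for the Bayesian combination described later in this description).
            s, e = speech_model.predict(audio), emg_model.predict(emg)
            scores = {t: s.get(t, 0.0) * e.get(t, 0.0) for t in set(s) | set(e)}
        return max(scores, key=scores.get)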


At act 580 of process 500, a speech recognition result (e.g., text) for the input sensor and/or speech data is determined based, at least in part, on an output of the one or more trained statistical models. In some embodiments, the speech recognition result is determined by processing the audio input to determine a first portion of the text, and by processing the plurality of neuromuscular signals to determine a second portion of the text. In some embodiments, the one or more trained statistical models include a first trained statistical model for determining the text based on the audio input and a second trained statistical model for determining the text based on the plurality of neuromuscular signals.


The speech recognition result may be determined for at least a first portion of the text based on a first output of the first trained statistical model. In some embodiments, the text is further determined for at least a second portion of the text based on a second output of the second trained statistical model. In some embodiments, the first portion and the second portion are overlapping. For example, the first three-quarters of the text may be determined using speech input and the last three-quarters using neuromuscular input, with the middle half of the text being determined using both speech and neuromuscular input. In this example, the user may have provided both speech input and neuromuscular input from the one-quarter mark to the three-quarter mark, while only providing speech input or neuromuscular input otherwise. In some embodiments, the first portion and the second portion are non-overlapping. For example, the first half of the text may be determined using speech input whereas the second half of the text may be determined using neuromuscular input.


In some embodiments, one or more statistical models for a hybrid neuromuscular and speech input interface are provided such that a first statistical model is trained for determining the text based on the audio input and a second statistical model is trained for determining the text based on the plurality of neuromuscular signals. Such a model implementation may be advantageous for faster training of new movements or activations because only the second statistical model need be updated in the training process. It is noted that the model implementation for the hybrid neuromuscular and speech input interface need not be limited to the described implementation. For example, such systems may employ one model for processing both neuromuscular and speech inputs or multiple models for processing each of the neuromuscular and speech inputs. Further details on how to combine the outputs of such models are provided below.


In some embodiments, an ASR model is provided and subsequently trained to personalize the ASR model according to EMG-based sensor data received for the user. For example, the ASR model may be provided as an artificial neural network with one or more layers, each layer including nodes with assigned weights. A layer of the artificial neural network may receive input in the form of EMG-based sensor data to learn the movements or activations from the user and corresponding output, e.g., text. Alternatively or additionally, the weights in one or more layers of the artificial neural network may be adapted to learn the movements or activations from the user and corresponding output. In some embodiments, a single model receives both speech data and EMG-based sensor data as inputs and the model is trained to generate output corresponding to these inputs. For example, the model may be provided with data collected as the user speaks, e.g., a phrase, and performs a corresponding movement or activation. In some embodiments, an engineered combination of models is provided where EMG-based sensor data relating to neuromuscular information is used to switch between one or more trained statistical models trained on speech data. For example, the EMG-based sensor data may be used to determine when a user makes a movement or activation to switch a language mode of the speech recognizer. Accordingly, if it is determined that the user desires a different language mode, the trained statistical model corresponding to the desired language mode is selected.


In some embodiments, the output predictions of a first statistical model (trained for determining text based on speech data, also referred to as a language model) and a second statistical model (trained for determining text based on sensor data, such as EMG signals) are combined as described below.


For notation, P(A|B) is defined as the conditional probability of A given B. The language model may give a prior distribution P(text) over the possible text utterances. Bayes rule may be applied to calculate the probability of the text given the observed speech and EMG sensor data, according to the following formula:

P(text|speech,EMG)=P(speech,EMG|text)*P(text)/P(speech,EMG)


For optimizing the output predictions, i.e., text, the term P(speech, EMG) may be ignored and the combination may focus on the proportionality relationship, according to the following formula:

P(text|speech,EMG)˜P(speech,EMG|text)*P(text)


The speech data and the EMG data may be assumed to be conditionally independent given the output text, according to the following formula:

P(speech,EMG|text)=P(speech|text)*P(EMG|text)


This assumption yields the following formula:

P(text|speech,EMG)˜P(speech|text)*P(EMG|text)*P(text)


In embodiments where the individual models have a stage at which they output these conditional probabilities, the above formula may be applied directly.


In embodiments where the models output P(text|speech) and P(text|EMG), Bayes rule may be applied, according to the following formulas:

P(speech|text)=P(text|speech)*P(speech)/P(text), and
P(EMG|text)=P(text|EMG)*P(EMG)/P(text)


These two equations may be substituted into the formula derived above, according to the following formula:

P(text|speech,EMG)˜P(text|speech)*P(speech)*P(text|EMG)*P(EMG)/P(text)


Finally, the terms with just speech and EMG may be dropped because output predictions are being optimized over text, according to the following formula:

P(text|speech,EMG)˜P(text|speech)*P(text|EMG)/P(text)


This formula combines a speech model that gives P(text|speech) with an EMG model that gives P(text|EMG).
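To make the combination rule concrete, the sketch below applies the relationship P(text|speech,EMG)˜P(text|speech)*P(text|EMG)/P(text) to a pair of made-up candidate words; the candidates and probabilities are illustrative only.

```python
# Worked sketch of the combination rule derived above; candidate texts and
# probabilities are made up for illustration.

def combine(p_text_given_speech, p_text_given_emg, p_text_prior):
    """Return a normalized posterior over candidate texts."""
    unnormalized = {
        text: p_text_given_speech[text] * p_text_given_emg[text] / p_text_prior[text]
        for text in p_text_given_speech
    }
    total = sum(unnormalized.values())
    return {text: score / total for text, score in unnormalized.items()}


speech = {"like": 0.8, "yike": 0.2}   # P(text | speech) from the speech model
emg = {"like": 0.9, "yike": 0.1}      # P(text | EMG) from the EMG model
prior = {"like": 0.9, "yike": 0.1}    # language-model prior P(text)

posterior = combine(speech, emg, prior)
best = max(posterior, key=posterior.get)  # -> "like"
```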


In some embodiments, only one of the substitutions may be applied if a model gives P(EMG|text), according to the following formula:

P(text|speech,EMG)˜P(text|speech)*P(EMG|text)


In some embodiments, the prior distribution of words/phrases in the language model is altered, e.g., when the gesture provides context for interpreting the speech. For example, the gesture may be a natural gesture a user makes in a given context to switch modes, such as making a fist gesture to switch to a proper noun mode. In proper noun mode, the language model output is biased such that proper nouns have a higher prior probability. If the language model is made aware of the upcoming input of a proper noun, the output of the model is more likely to be text for a proper noun. For example, the prior probability of proper nouns may be multiplied by a number greater than one to increase the bias for proper nouns. The language model may function in the same manner as before the switch to proper noun mode, except for applying a higher prior probability to proper nouns.
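A minimal sketch of this prior biasing is shown below; the vocabulary, the proper-noun set, and the multiplier are assumptions chosen only to illustrate multiplying proper-noun priors by a factor greater than one and renormalizing.

```python
# Sketch of biasing the language-model prior while "proper noun mode" is active;
# the vocabulary, proper-noun set, and factor are illustrative assumptions.

def bias_prior(prior, is_proper_noun, factor=3.0):
    biased = {
        word: p * (factor if is_proper_noun(word) else 1.0)
        for word, p in prior.items()
    }
    total = sum(biased.values())
    return {word: p / total for word, p in biased.items()}


prior = {"smith": 0.010, "smyth": 0.002, "since": 0.050}
proper_nouns = {"smith", "smyth"}

biased = bias_prior(prior, lambda word: word in proper_nouns)
# "smith" and "smyth" now carry relatively more prior mass than "since".
```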


In some embodiments, the described systems and methods allow for obtaining one or more neuromuscular signals (e.g., EMG signals) in parallel with or substantially at the same time as obtaining speech data for one or multiple users. The neuromuscular information derived from the signals may be used to modify the behavior of the speech recognizer, e.g., switch to another mode of the speech recognizer. For example, neuromuscular information derived from neuromuscular signals from a user may indicate that the user wishes to activate a “spell mode” of the speech recognizer. Accordingly, the neuromuscular information may be used to switch the mode of the speech recognizer to character-based text entry. The user may make movements or activations and the corresponding neuromuscular information may be used to interpret the characters the user wishes to enter. Subsequently, neuromuscular information derived from neuromuscular signals from the user may indicate that the user wishes to deactivate the “spell mode” of the speech recognizer. In this manner, the user may alternate between speech input (e.g., to enter words) and neuromuscular input (e.g., to enter characters) in order to enter the desired text. In some embodiments, when switching to “spell mode,” the speech recognizer swaps a language model suitable for speech input (e.g., to enter words) with another language model suitable for neuromuscular input (e.g., to enter characters). In some embodiments, when switching to “spell mode,” the language model output is biased towards character-based text entry. For example, a prior distribution in the language model is selected to better recognize character-based entry. If the language model is made aware of the upcoming input of character-based text entry, the output of the model is more likely to recognize the characters as spelling out one or more words.
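The sketch below illustrates one possible arrangement of such a "spell mode" toggle; the event names and recognizer interfaces are assumptions for illustration.

```python
# Illustrative sketch: neuromuscular events switch the recognizer between a
# word-level path for speech input and a character-level path for EMG input.
# Event names and recognizer callables are assumed for the example.

class HybridTextEntry:
    def __init__(self, word_recognizer, char_recognizer):
        self.word_recognizer = word_recognizer   # maps speech features -> a word
        self.char_recognizer = char_recognizer   # maps an EMG event -> a character
        self.spell_mode = False
        self.text = []

    def on_emg_event(self, event):
        if event == "activate_spell_mode":
            self.spell_mode = True
        elif event == "deactivate_spell_mode":
            self.spell_mode = False
        elif self.spell_mode:
            # While spell mode is on, interpret the EMG event as a character.
            self.text.append(self.char_recognizer(event))

    def on_speech(self, audio_features):
        if not self.spell_mode:
            # Word-level entry via the speech recognizer outside spell mode.
            self.text.append(self.word_recognizer(audio_features))
```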


Some embodiments of the systems and methods described herein provide for determining text input with model(s) that use a linguistic token, such as phonemes, characters, syllables, words, sentences, or another suitable linguistic token, as the basic unit of recognition. An advantage of using phonemes as the linguistic token may be that a phoneme-based representation is closer to natural speech production than character-based typing. Additionally, a phoneme-based model may provide faster recognition performance than a character-based model because the phoneme-based approach uses a denser encoding compared to using characters.


For the implementation using phonemes as the linguistic token, the inventors have recognized that creating a phoneme-based vocabulary that is easy to learn and recognize may be challenging, in part because the number of phonemes in a language (e.g., 36 phonemes for English) may be larger than the number of characters in the language (e.g., 26 characters). In some embodiments, the text input may be performed using an adaptive movement or activation information recognizer instead of a fixed phoneme vocabulary. In some embodiments, a speech synthesizer provides audio feedback to the user while the user trains the adaptive system to create a mapping between body position information (e.g., movement, hand states, and/or gestures) and phonemes. In some embodiments, the training system may be presented to the user as a game, e.g., a mimicry game. Language models may be applied to the input, similar to a speech recognizer, to decode EMG signals through soft phoneme predictions into text.
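As an illustration of decoding soft phoneme predictions into text with a language model, the sketch below scores candidate words from a small lexicon against per-step phoneme probabilities; the lexicon, phoneme labels, and probabilities are invented for the example.

```python
# Simplified sketch: each candidate word is scored by its phoneme sequence under
# per-step "soft" phoneme probabilities from the EMG model, combined with a
# language-model prior. All values below are illustrative assumptions.

import math


def score_word(word_phonemes, soft_phoneme_steps, lm_prior):
    """log score ~ sum of log per-step phoneme probabilities + log language-model prior."""
    if len(word_phonemes) != len(soft_phoneme_steps):
        return float("-inf")  # crude length matching, sufficient for the sketch
    acoustic = sum(math.log(step.get(ph, 1e-6))
                   for ph, step in zip(word_phonemes, soft_phoneme_steps))
    return acoustic + math.log(lm_prior)


lexicon = {"cat": ["k", "ae", "t"], "bat": ["b", "ae", "t"]}
lm = {"cat": 0.6, "bat": 0.4}
# Soft phoneme predictions from the EMG model, one distribution per step:
steps = [{"k": 0.7, "b": 0.3}, {"ae": 0.9}, {"t": 0.95}]

best = max(lexicon, key=lambda w: score_word(lexicon[w], steps, lm[w]))  # -> "cat"
```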


In some embodiments, the described systems and methods allow for the user to “speak” with their hands by providing hand states that correspond to different linguistic tokens, such as phonemes. For example, some gesture-based language techniques, such as American Sign Language, map gestures to individual characters (e.g., letters) or entire words. Some embodiments are directed to allowing the user to “speak” with their hands using an intermediate level of representation between characters and entire words that more closely represents speech production. For example, a phoneme representation may be used and a model may map the user's hand states to particular phonemes. A phoneme-based system may provide a measure of privacy because a user may perform the movement or activation, such as the gesture, with little or no visible motion. It is noted that such movement-free or limited-movement systems need not be limited to using phonemes as their linguistic token. For example, such systems may use another linguistic token, such as characters. Such a system may also enable the user to provide input faster than they could using individual characters, but without having to learn movements or activations for a large vocabulary of words. For example, a phoneme-based system may provide for a speed of 200 words per minute, which is faster than a typical character typing rate. It is noted that such systems may additionally or alternatively use another linguistic token, such as common letter combinations found on a stenographer's keyboard.


In some embodiments, the described systems and methods allow for the user to “speak” with their hands by providing movements or activations that correspond to different linguistic tokens, such as characters. In using such a character representation, a model may map EMG signals for the user's hand states to particular characters. For example, the user may type on a flat surface as if it were a keyboard and perform hand states for keys corresponding to the characters the user wishes to enter. Such character-based text entry (e.g., via detection of EMG signals) may be combined with speech-based text entry. The user may use speech-based text entry for initial text but, for example, at a later point in time, switch modes to character-based text entry (e.g., enter “spell mode”) and input hand states corresponding to the characters the user wishes to enter. In other embodiments, speech-based entry may be processed in parallel with text entry, such as using a speech command to change entry mode while typing (e.g., changing to all capitals, executing a control key operation, etc.) or to modify a current input from or output to another device (e.g., a keyboard, a heads-up display, etc.). Any combination of entry using speech-based recognition and EMG signal processing may be performed to derive one or more multi-dimensional input/output mode(s) according to various embodiments.


In some embodiments, the described systems and methods allow for adaptive training of one or more statistical models to map neuromuscular information to linguistic tokens, such as phonemes. For example, the user may be asked to produce one or more simple words using hand states corresponding to phonemes. In some embodiments, the training may not be directed to explicitly generating mappings from neuromuscular information (e.g., for a gesture) to phonemes for the user. Instead, the user may be asked to produce hand states for one or more words and the statistical models may be adapted based on the information learned from this process. For example, the user may be presented with a user interface that displays a training “game,” where the user earns points for every correct hand state made to produce one or more target words. In some embodiments, a speech synthesizer may provide audio feedback to the user based on the phonemes produced by the user's hand states. The feedback may help the user understand how to improve his or her hand states to produce the correct phonemes for the target words.


In some embodiments, the described systems and methods allow for the user to define an individualized mapping from neuromuscular information to linguistic tokens such as phonemes, by selecting what hand state, gesture, movement, or activation to use for each phoneme. For example, the user may train the one or more statistical models using small finger movements or muscle activations detectable by sensors 110. If two movements are too similar to each other, the user may be asked to make one of the movements slightly differently so that the two movements can be distinguished. In some embodiments, feedback may be provided by the system to the user to encourage the user to produce movements or activations that are distinct from each other, enabling the system to learn a better mapping from movement or activation to phoneme.
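A simple way to provide such feedback is sketched below: pairwise similarity between the EMG feature vectors of two user-chosen movements is measured, and the user is prompted when two movements would be hard to distinguish. The feature representation and threshold are illustrative assumptions.

```python
# Sketch of distinctness feedback for user-defined phoneme mappings; the
# feature vectors and similarity threshold are illustrative assumptions.

import math


def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


def check_distinctness(phoneme_to_features, threshold=0.95):
    """Warn the user when two chosen movements produce nearly identical features."""
    phonemes = list(phoneme_to_features)
    for i in range(len(phonemes)):
        for j in range(i + 1, len(phonemes)):
            a, b = phonemes[i], phonemes[j]
            similarity = cosine_similarity(phoneme_to_features[a], phoneme_to_features[b])
            if similarity > threshold:
                print(f"Movements for /{a}/ and /{b}/ are hard to distinguish; "
                      f"try making one of them slightly differently.")
```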


In some embodiments, a pre-trained fixed mapping, analogous to typing on a regular keyboard, may be provided, and the pre-trained mapping may be adapted or individualized to the user's movement or activation characteristics as the user uses the system. In such an adaptive system, the user may be able to minimize their movement over time to achieve the same system performance, such that smaller and smaller movements may be sufficient to produce neuromuscular signals mapped to different phonemes recognizable by the system. The system may be configured to adapt to the user's movements or activations in the background as the user is performing typical everyday tasks. For example, the system may be configured to track keys pressed by a user (e.g., using a key logger) as the user wears the wearable device of the system while typing on a keyboard, and the system may be configured to determine mappings between neuromuscular information, recorded as the user types, and the recorded keystrokes.
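One way such background adaptation could be organized is sketched below: logged keystrokes are time-aligned with EMG windows to produce (signal, key) pairs for refining the pre-trained mapping. The timestamps, window length, and adaptation hook are assumptions for the sketch.

```python
# Illustrative sketch of background adaptation while the user types on a regular
# keyboard: logged keystrokes are paired with time-aligned EMG windows.

WINDOW_SECONDS = 0.25  # assumed EMG window length around each keystroke


def build_training_pairs(keystrokes, emg_stream):
    """keystrokes: list of (timestamp, key); emg_stream: list of (timestamp, samples)."""
    pairs = []
    for key_time, key in keystrokes:
        window = [samples for t, samples in emg_stream
                  if abs(t - key_time) <= WINDOW_SECONDS / 2]
        if window:
            pairs.append((window, key))
    return pairs

# pairs = build_training_pairs(logged_keys, recorded_emg)
# model.adapt(pairs)   # hypothetical adaptation step on the pre-trained mapping
```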


Moreover, the system may not be limited to training in a phase separate from use of the system. In some embodiments, the system is configured to adapt a pre-trained mapping or another suitable mapping based on information from tracking a signal from the user indicating an erroneous text entry. For example, the signal may include a voice command (e.g., “backspace,” “undo,” “delete word,” or another suitable voice command indicating an error was made), one or more neuromuscular signals (e.g., a gesture relating to a command, such as “backspace,” “undo,” “delete word,” or another suitable command indicating an error was made), a signal from the user accepting an auto-correction of an erroneous text entry, or another suitable user signal indicating an erroneous text entry. The system may adapt a pre-trained mapping or another suitable mapping to the user based on this tracked information.
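The sketch below shows one way error signals such as "backspace" or "undo" might be tracked and turned into adaptation data; the signal names and the adaptation hook are illustrative assumptions.

```python
# Sketch of using error signals (voice commands, gestures, accepted
# auto-corrections) to collect adaptation data: when a correction signal
# arrives, the most recent prediction is logged as a negative example.

ERROR_SIGNALS = {"backspace", "undo", "delete_word", "accepted_autocorrect"}


class ErrorDrivenAdapter:
    def __init__(self):
        self.history = []           # list of (input_features, predicted_text)
        self.negative_examples = []

    def record_prediction(self, input_features, predicted_text):
        self.history.append((input_features, predicted_text))

    def on_user_signal(self, signal):
        if signal in ERROR_SIGNALS and self.history:
            # The last prediction is presumed erroneous and saved for adaptation.
            self.negative_examples.append(self.history[-1])
            # model.adapt(negative=self.negative_examples)  # hypothetical hook
```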


In some embodiments, the system is configured to adapt a pre-trained mapping or another suitable mapping based on consistency with a language model. For example, in the absence of adaptation to the language model, the system may determine the output text to be “she yikes to eat ice cream” instead of “she likes to eat ice cream.” The language model may include prior probabilities of certain combinations of words, phrases, sentences, or another suitable linguistic token, and the system may select the output text corresponding to a higher probability in the language model. For example, the language model may indicate that the phrase “likes to eat” has a higher probability than the phrase “yikes to eat.” Accordingly, to be consistent with the language model, the system may adapt the pre-trained mapping or another suitable mapping and select output text having the higher probability, e.g., “she likes to eat ice cream.”
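A toy version of this consistency check is sketched below; the phrase table and probabilities are invented solely to illustrate selecting the transcription that the language model scores higher.

```python
# Sketch of the language-model consistency check: two candidate transcriptions
# are scored against phrase probabilities and the more probable one is kept.
# The phrase table and probabilities are illustrative assumptions.

PHRASE_PROBABILITIES = {
    "likes to eat": 0.02,
    "yikes to eat": 0.00001,
}


def pick_consistent(candidates):
    def score(sentence):
        return max((p for phrase, p in PHRASE_PROBABILITIES.items() if phrase in sentence),
                   default=1e-9)
    return max(candidates, key=score)


best = pick_consistent(["she yikes to eat ice cream", "she likes to eat ice cream"])
# -> "she likes to eat ice cream"
```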


In some embodiments, the system is configured to map neuromuscular information (derived from one or more neuromuscular signals, e.g., EMG signals) to an error indication from the user. For example, the user may tense one or more muscles after the system erroneously interprets a word the user spoke correctly. The neuromuscular signals relating to that movement or activation from the user may be mapped as an error indication from the user. In this manner, the user is not required to provide a training signal particularly relating to an error indication. In some embodiments, when the system detects neuromuscular information relating to the error indication, the system automatically corrects the error. For example, the system may automatically delete the last interpreted word. In another example, the system may provide the user with one or more options to correct the last interpreted word. In yet another example, the system may automatically replace the last interpreted word with another interpretation based on a language model. In some embodiments, the system may further adapt the pre-trained mapping or another suitable mapping based on the detected error indication. For example, the system may modify a language model associated with the speech recognizer to implement the correct interpretation. Having been configured to detect the error indication, the system may be able to differentiate between a case when the user made an error (e.g., the user spoke the wrong word) and a case when the speech recognizer made an error (e.g., the user spoke the correct word, but the speech recognizer interpreted it incorrectly). For example, the user may speak the word “yike” instead of “like,” and the speech recognizer may interpret the word correctly as “yike.” In this case, the system may detect the error to be a user error. In another example, the user may speak the word “like,” but the speech recognizer may interpret the word incorrectly as “yike.” The system may leverage the capability to separately detect these two types of errors to improve further adaptation of the pre-trained mapping or another suitable mapping to the user.
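The sketch below shows one plausible way to separate the two error cases once an EMG-based error indication has been detected, using the recognizer's acoustic confidence as the discriminating signal; the threshold and decision rule are assumptions, not the disclosed method.

```python
# Hedged sketch of classifying an error once an EMG-based error indication is
# detected: high acoustic confidence suggests the recognizer faithfully
# transcribed what was spoken (user error); low confidence suggests a
# misrecognition (recognizer error). Threshold and rule are assumptions.

CONFIDENCE_THRESHOLD = 0.8  # assumed


def classify_error(acoustic_confidence_of_output):
    if acoustic_confidence_of_output >= CONFIDENCE_THRESHOLD:
        return "user_error"        # recognizer likely transcribed what was actually spoken
    return "recognizer_error"      # low confidence: likely a misrecognition


def on_error_indication(last_output_word, acoustic_confidence, language_model_alternative):
    kind = classify_error(acoustic_confidence)
    if kind == "recognizer_error":
        # e.g., automatically replace with the language-model-preferred alternative.
        return language_model_alternative
    return last_output_word        # keep the faithful transcription of a user slip
```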


The above-described embodiments can be implemented in any of numerous ways. For example, the embodiments may be implemented using hardware, software or a combination thereof. When implemented in software, the software code can be executed on any suitable processor or collection of processors, whether provided in a single computer or distributed among multiple computers. It should be appreciated that any component or collection of components that perform the functions described above can be generically considered as one or more controllers that control the above-discussed functions. The one or more controllers can be implemented in numerous ways, such as with dedicated hardware or with one or more processors programmed using microcode or software to perform the functions recited above.


In this respect, it should be appreciated that one implementation of the embodiments of the present invention comprises at least one non-transitory computer-readable storage medium (e.g., a computer memory, a portable memory, a compact disk, etc.) encoded with a computer program (i.e., a plurality of instructions), which, when executed on a processor, performs the above-discussed functions of the embodiments of the present invention. The computer-readable storage medium can be transportable such that the program stored thereon can be loaded onto any computer resource to implement the aspects of the present invention discussed herein. In addition, it should be appreciated that the reference to a computer program which, when executed, performs the above-discussed functions, is not limited to an application program running on a host computer. Rather, the term computer program is used herein in a generic sense to reference any type of computer code (e.g., software or microcode) that can be employed to program a processor to implement the above-discussed aspects of the present invention.


Various aspects of the present invention may be used alone, in combination, or in a variety of arrangements not specifically discussed in the embodiments described in the foregoing and are therefore not limited in their application to the details and arrangement of components set forth in the foregoing description or illustrated in the drawings. For example, aspects described in one embodiment may be combined in any manner with aspects described in other embodiments.


Also, embodiments of the invention may be implemented as one or more methods, of which an example has been provided. The acts performed as part of the method(s) may be ordered in any suitable way. Accordingly, embodiments may be constructed in which acts are performed in an order different than illustrated, which may include performing some acts simultaneously, even though shown as sequential acts in illustrative embodiments.


Use of ordinal terms such as “first,” “second,” “third,” etc., in the claims to modify a claim element does not by itself connote any priority, precedence, or order of one claim element over another or the temporal order in which acts of a method are performed. Such terms are used merely as labels to distinguish one claim element having a certain name from another element having a same name (but for use of the ordinal term).


The phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting. The use of “including,” “comprising,” “having,” “containing”, “involving”, and variations thereof, is meant to encompass the items listed thereafter and additional items.


Having described several embodiments of the invention in detail, various modifications and improvements will readily occur to those skilled in the art. Such modifications and improvements are intended to be within the spirit and scope of the invention. Accordingly, the foregoing description is by way of example only, and is not intended as limiting. The invention is limited only as defined by the following claims and the equivalents thereto.

Claims
  • 1. A wearable device, comprising: one or more neuromuscular sensors configured to record a plurality of neuromuscular signals from a user donning the wearable device; and one or more processors programmed to: provide the plurality of neuromuscular signals or signals derived from the plurality of neuromuscular signals as an input to one or more trained statistical models; determine, based at least in part, on an output of the one or more trained statistical models, whether the user is holding a gesture, which includes evaluating two or more time points associated with the plurality of neuromuscular signals to determine whether the user is performing the gesture at each of the two or more time points; generate a first instruction for modifying an operation of an active speech recognizer while the active speech recognizer is active in response to determining that the user is holding the gesture; and generate a second instruction for modifying the operation of the active speech recognizer while the active speech recognizer is active in response to determining that the user is not holding the gesture.
  • 2. The wearable device of claim 1, further comprising at least one inertial measurement unit that is configured to record a movement of the user, wherein the input is a first input to the one or more trained statistical models and the one or more processors is further programmed to provide the movement of the user as a second input to the one or more trained statistical models.
  • 3. The wearable device of claim 1, wherein the gesture comprises contextual information associated with at least one of movements of the user or activation of muscles of the user.
  • 4. The wearable device of claim 3, wherein the speech recognizer interprets parts of speech provided to the speech recognizer by the user based on the contextual information.
  • 5. The wearable device of claim 1, wherein modifying the operation of the speech recognizer comprises changing an interaction mode of the speech recognizer.
  • 6. The wearable device of claim 5, wherein the interaction mode comprises at least one of a dictation mode, a spelling mode, an editing mode, or a navigation mode.
  • 7. The wearable device of claim 1, wherein the one or more processors is further programmed to output text in response to determining whether the user is holding the gesture.
  • 8. The wearable device of claim 1, wherein the one or more processors is further programmed to convert speech provided to the speech recognizer by the user to text.
  • 9. The wearable device of claim 8, wherein modifying the operation of the speech recognizer comprises instructing the speech recognizer to map the gesture to a linguistic token that is used to convert the speech to the text.
  • 10. The wearable device of claim 8, wherein the one or more processors is further programmed to correct the text converted from speech based on determining whether the user is holding the gesture.
  • 11. A method comprising: receiving a plurality of neuromuscular signals from a wearable device donned by a user; providing the plurality of neuromuscular signals or signals derived from the plurality of neuromuscular signals as an input to one or more trained statistical models; determining, based at least in part, on an output of the one or more trained statistical models, whether the user is holding a gesture, which includes evaluating two or more time points associated with the plurality of neuromuscular signals to determine whether the user is performing the gesture at each of the two or more time points; relaying a first instruction for modifying an operation of an active speech recognizer while the active speech recognizer is active in response to determining that the user is holding the gesture; and relaying a second instruction for modifying the operation of the active speech recognizer while the active speech recognizer is active in response to determining that the user is not holding the gesture.
  • 12. The method of claim 11, wherein: the input is a first input to the one or more trained statistical models; and the method further comprises: configuring at least one inertial measurement unit of the wearable device to record a movement of the user; and providing data representative of the movement of the user as a second input to the one or more trained statistical models.
  • 13. The method of claim 11, wherein the gesture comprises contextual information associated with at least one of movements of the user or activation of muscles of the user.
  • 14. The method of claim 13, wherein the speech recognizer interprets parts of speech provided to the speech recognizer by the user based on the contextual information.
  • 15. The method of claim 11, wherein modifying the operation of the speech recognizer comprises changing an interaction mode of the speech recognizer.
  • 16. The method of claim 15, wherein the interaction mode comprises at least one of a dictation mode, a spelling mode, an editing mode, or a navigation mode.
  • 17. The method of claim 11, further comprising outputting text in response to determining whether the user is holding the gesture.
  • 18. The method of claim 11, further comprising converting speech provided to the speech recognizer by the user to text.
  • 19. The method of claim 18, wherein modifying the operation of the speech recognizer comprises instructing the speech recognizer to map the gesture to a linguistic token that is used to convert the speech to the text.
  • 20. A non-transitory computer-readable medium encoded with instructions that, when executed by at least one computer processor, perform a method of: receiving a plurality of neuromuscular signals from a wearable device donned by a user; providing the plurality of neuromuscular signals or signals derived from the plurality of neuromuscular signals as an input to one or more trained statistical models; determining, based at least in part, on an output of the one or more trained statistical models, whether the user is holding a gesture, which includes evaluating two or more time points associated with the plurality of neuromuscular signals to determine whether the user is performing the gesture at each of the two or more time points; relaying a first instruction for modifying an operation of an active speech recognizer while the active speech recognizer is active in response to determining that the user is holding the gesture; and relaying a second instruction for modifying the operation of the active speech recognizer while the active speech recognizer is active in response to determining that the user is not holding the gesture.
CROSS REFERENCE TO RELATED APPLICATION

This application is a continuation of U.S. patent application Ser. No. 15/974,384, titled “SYSTEMS AND METHODS FOR IMPROVED SPEECH RECOGNITION USING NEUROMUSCULAR INFORMATION,” filed on May 8, 2018, the disclosure of which is incorporated, in its entirety, by this reference.

US Referenced Citations (528)
Number Name Date Kind
1411995 Dull Apr 1922 A
3580243 Johnson et al. May 1971 A
3620208 Higley Nov 1971 A
3880146 Everett et al. Apr 1975 A
4055168 Miller et al. Oct 1977 A
4602639 Hoogendoorn et al. Jul 1986 A
4705408 Jordi Nov 1987 A
4817064 Milles Mar 1989 A
4896120 Kamil Jan 1990 A
5003978 Dunseath, Jr. Apr 1991 A
D322227 Warhol Dec 1991 S
5081852 Cox Jan 1992 A
5251189 Thorp Oct 1993 A
D348660 Parsons Jul 1994 S
5445869 Ishikawa et al. Aug 1995 A
5482051 Reddy et al. Jan 1996 A
5605059 Woodward Feb 1997 A
5625577 Kunii et al. Apr 1997 A
5683404 Johnson Nov 1997 A
6005548 Latypov et al. Dec 1999 A
6009210 Kang Dec 1999 A
6032530 Hock Mar 2000 A
6184847 Fateh et al. Feb 2001 B1
6238338 DeLuca et al. May 2001 B1
6244873 Hill et al. Jun 2001 B1
6377277 Yamamoto Apr 2002 B1
D459352 Giovanniello Jun 2002 S
6411843 Zarychta Jun 2002 B1
6487906 Hock Dec 2002 B1
6510333 Licata et al. Jan 2003 B1
6527711 Stivoric et al. Mar 2003 B1
6619836 Silvant et al. Sep 2003 B1
6658287 Litt et al. Dec 2003 B1
6720984 Jorgensen et al. Apr 2004 B1
6743982 Biegelsen et al. Jun 2004 B2
6774885 Even-Zohar Aug 2004 B1
6807438 Brun Del Re et al. Oct 2004 B1
D502661 Rapport Mar 2005 S
D502662 Rapport Mar 2005 S
6865409 Getsla et al. Mar 2005 B2
D503646 Rapport Apr 2005 S
6880364 Vidolin et al. Apr 2005 B1
6927343 Watanabe et al. Aug 2005 B2
6942621 Avinash et al. Sep 2005 B2
6965842 Rekimoto Nov 2005 B2
6972734 Ohshima et al. Dec 2005 B1
6984208 Zheng Jan 2006 B2
7022919 Brist et al. Apr 2006 B2
7086218 Pasach Aug 2006 B1
7089148 Bachmann et al. Aug 2006 B1
D535401 Travis et al. Jan 2007 S
7173437 Hervieux et al. Feb 2007 B2
7209114 Radley-Smith Apr 2007 B2
D543212 Marks May 2007 S
7265298 Maghribi et al. Sep 2007 B2
7271774 Puuri Sep 2007 B2
7333090 Tanaka et al. Feb 2008 B2
7351975 Brady et al. Apr 2008 B2
7450107 Radley-Smith Nov 2008 B2
7491892 Wanger et al. Feb 2009 B2
7517725 Reis Apr 2009 B2
7558622 Tran Jul 2009 B2
7574253 Edney et al. Aug 2009 B2
7580742 Tan et al. Aug 2009 B2
7596393 Jung et al. Sep 2009 B2
7618260 Daniel et al. Nov 2009 B2
7636549 Ma et al. Dec 2009 B2
7640007 Chen et al. Dec 2009 B2
7660126 Cho et al. Feb 2010 B2
7787946 Stahmann et al. Aug 2010 B2
7805386 Greer Sep 2010 B2
7809435 Ettare et al. Oct 2010 B1
7844310 Anderson Nov 2010 B2
7870211 Pascal et al. Jan 2011 B2
7901368 Flaherty et al. Mar 2011 B2
7925100 Howell et al. Apr 2011 B2
7948763 Chuang May 2011 B2
D643428 Janky et al. Aug 2011 S
D646192 Woode Oct 2011 S
8054061 Prance et al. Nov 2011 B2
D654622 Hsu Feb 2012 S
8170656 Tan et al. May 2012 B2
8179604 Prada Gomez et al. May 2012 B1
8188937 Amafuji et al. May 2012 B1
8190249 Gharieb et al. May 2012 B1
D661613 Demeglio Jun 2012 S
8203502 Chi et al. Jun 2012 B1
8207473 Axisa et al. Jun 2012 B2
8212859 Tang et al. Jul 2012 B2
8311623 Sanger Nov 2012 B2
8351651 Lee Jan 2013 B2
8355671 Kramer et al. Jan 2013 B2
8389862 Arora et al. Apr 2013 B2
8421634 Tan et al. Apr 2013 B2
8427977 Workman et al. Apr 2013 B2
D682727 Bulgari May 2013 S
8435191 Barboutis et al. May 2013 B2
8437844 Syed Momen et al. May 2013 B2
8447704 Tan et al. May 2013 B2
8467270 Gossweiler, III et al. Jun 2013 B2
8469741 Oster et al. Jun 2013 B2
8484022 Vanhoucke Jul 2013 B1
D689862 Liu Sep 2013 S
8591411 Banet et al. Nov 2013 B2
D695454 Moore Dec 2013 S
8620361 Bailey et al. Dec 2013 B2
8624124 Koo et al. Jan 2014 B2
8702629 Giuffrida et al. Apr 2014 B2
8704882 Turner Apr 2014 B2
8718980 Garudadri et al. May 2014 B2
8744543 Li et al. Jun 2014 B2
8754862 Zaliva Jun 2014 B2
8777668 Ikeda et al. Jul 2014 B2
D716457 Brefka et al. Oct 2014 S
D717685 Bailey et al. Nov 2014 S
8879276 Wang Nov 2014 B2
8880163 Barachant et al. Nov 2014 B2
8883287 Boyce et al. Nov 2014 B2
8890875 Jammes et al. Nov 2014 B2
8892479 Tan et al. Nov 2014 B2
8895865 Lenahan et al. Nov 2014 B2
8912094 Koo et al. Dec 2014 B2
8922481 Kauffman et al. Dec 2014 B1
8970571 Wong et al. Mar 2015 B1
8971023 Olsson et al. Mar 2015 B2
9018532 Wesselmann et al. Apr 2015 B2
9037530 Tan et al. May 2015 B2
9086687 Park et al. Jul 2015 B2
9092664 Forutanpour et al. Jul 2015 B2
D736664 Paradise et al. Aug 2015 S
9146730 Lazar Sep 2015 B2
D741855 Park et al. Oct 2015 S
9170674 Forutanpour et al. Oct 2015 B2
D742272 Bailey et al. Nov 2015 S
D742874 Cheng et al. Nov 2015 S
D743963 Osterhout Nov 2015 S
9182826 Powledge et al. Nov 2015 B2
9211417 Heldman et al. Dec 2015 B2
9218574 Phillipps et al. Dec 2015 B2
D747714 Erbeus Jan 2016 S
9235934 Mandella et al. Jan 2016 B2
9240069 Li Jan 2016 B1
D750623 Park et al. Mar 2016 S
D751065 Magi Mar 2016 S
9278453 Assad Mar 2016 B2
9299248 Lake et al. Mar 2016 B2
D756359 Bailey et al. May 2016 S
9351653 Harrison May 2016 B1
9367139 Ataee et al. Jun 2016 B2
9372535 Bailey et al. Jun 2016 B2
9389694 Ataee et al. Jul 2016 B2
9393418 Giuffrida et al. Jul 2016 B2
9408316 Bailey et al. Aug 2016 B2
9418927 Axisa et al. Aug 2016 B2
9439566 Arne et al. Sep 2016 B2
9459697 Bedikian et al. Oct 2016 B2
9472956 Michaelis et al. Oct 2016 B2
9477313 Mistry et al. Oct 2016 B2
9483123 Aleem et al. Nov 2016 B2
9529434 Choi et al. Dec 2016 B2
9597015 McNames et al. Mar 2017 B2
9600030 Bailey et al. Mar 2017 B2
9612661 Wagner et al. Apr 2017 B2
9613262 Holz Apr 2017 B2
9654477 Kotamraju May 2017 B1
9659403 Horowitz May 2017 B1
9687168 John Jun 2017 B2
9696795 Marcolina et al. Jul 2017 B2
9720515 Wagner et al. Aug 2017 B2
9741169 Holz Aug 2017 B1
9766709 Holz Sep 2017 B2
9785247 Horowitz et al. Oct 2017 B1
9788789 Bailey Oct 2017 B2
9864431 Keskin et al. Jan 2018 B2
9867548 Le et al. Jan 2018 B2
9880632 Ataee et al. Jan 2018 B2
9891718 Connor Feb 2018 B2
10042422 Morun et al. Aug 2018 B2
10070799 Ang et al. Sep 2018 B2
10078435 Noel Sep 2018 B2
10101809 Morun et al. Oct 2018 B2
10152082 Bailey Dec 2018 B2
10188309 Morun et al. Jan 2019 B2
10199008 Aleem et al. Feb 2019 B2
10203751 Keskin et al. Feb 2019 B2
10216274 Chapeskie et al. Feb 2019 B2
10251577 Morun et al. Apr 2019 B2
10310601 Morun et al. Jun 2019 B2
10331210 Morun et al. Jun 2019 B2
10362958 Morun et al. Jul 2019 B2
10409371 Kaifosh et al. Sep 2019 B2
10437335 Daniels Oct 2019 B2
10460455 Giurgica-Tiron et al. Oct 2019 B2
10489986 Kaifosh et al. Nov 2019 B2
10496168 Kaifosh et al. Dec 2019 B2
10504286 Kaifosh et al. Dec 2019 B2
10592001 Berenzweig Mar 2020 B2
20020032386 Snackner et al. Mar 2002 A1
20020077534 DuRousseau Jun 2002 A1
20020094701 Biegelsen et al. Jul 2002 A1
20030036691 Stanaland et al. Feb 2003 A1
20030051505 Robertson et al. Mar 2003 A1
20030144586 Tsubata Jul 2003 A1
20030144829 Geatz et al. Jul 2003 A1
20030171921 Manabe et al. Sep 2003 A1
20030184544 Prudent Oct 2003 A1
20040054273 Finneran et al. Mar 2004 A1
20040068409 Tanaka et al. Apr 2004 A1
20040073104 Brun del Re et al. Apr 2004 A1
20040092839 Shin et al. May 2004 A1
20040194500 Rapport Oct 2004 A1
20040210165 Marmaropoulos et al. Oct 2004 A1
20040243342 Rekimoto Dec 2004 A1
20050005637 Rapport Jan 2005 A1
20050012715 Ford Jan 2005 A1
20050070227 Shen et al. Mar 2005 A1
20050119701 Lauter et al. Jun 2005 A1
20050177038 Kolpin et al. Aug 2005 A1
20060037359 Stinespring Feb 2006 A1
20060061544 Min et al. Mar 2006 A1
20060121958 Fung et al. Jun 2006 A1
20060129057 Maekawa et al. Jun 2006 A1
20070009151 Pittman et al. Jan 2007 A1
20070016265 Davoodi et al. Jan 2007 A1
20070132785 Ebersole, Jr. et al. Jun 2007 A1
20070172797 Hada et al. Jul 2007 A1
20070177770 Derchak et al. Aug 2007 A1
20070256494 Nakamura et al. Nov 2007 A1
20070285399 Lund Dec 2007 A1
20080051673 Kong et al. Feb 2008 A1
20080052643 Ike et al. Feb 2008 A1
20080103639 Troy et al. May 2008 A1
20080103769 Schultz et al. May 2008 A1
20080136775 Conant Jun 2008 A1
20080214360 Stiding et al. Sep 2008 A1
20080221487 Zahar et al. Sep 2008 A1
20080262772 Luinge et al. Oct 2008 A1
20090007597 Hanevold Jan 2009 A1
20090027337 Hildreth Jan 2009 A1
20090031757 Harding Feb 2009 A1
20090040016 Ikeda Feb 2009 A1
20090051544 Niknejad Feb 2009 A1
20090079813 Hildreth Mar 2009 A1
20090082692 Hale et al. Mar 2009 A1
20090082701 Zohar et al. Mar 2009 A1
20090102580 Uchaykin Apr 2009 A1
20090112080 Matthews Apr 2009 A1
20090124881 Rytky May 2009 A1
20090189867 Krah et al. Jul 2009 A1
20090251407 Flake et al. Oct 2009 A1
20090318785 Ishikawa et al. Dec 2009 A1
20090326406 Tan et al. Dec 2009 A1
20090327171 Tan et al. Dec 2009 A1
20100030532 Arora et al. Feb 2010 A1
20100041974 Ting et al. Feb 2010 A1
20100063794 Hernandez-Rebollar Mar 2010 A1
20100106044 Linderman Apr 2010 A1
20100113910 Brauers et al. May 2010 A1
20100228487 Luethardt et al. Sep 2010 A1
20100249635 Van Der Reijden Sep 2010 A1
20100280628 Sankai Nov 2010 A1
20100292595 Paul Nov 2010 A1
20100292606 Prakash et al. Nov 2010 A1
20100292617 Lei et al. Nov 2010 A1
20100293115 Seyed Momen Nov 2010 A1
20100315266 Gunawardana et al. Dec 2010 A1
20100317958 Bech et al. Dec 2010 A1
20110018754 Tojima et al. Jan 2011 A1
20110077484 Van Slyke et al. Mar 2011 A1
20110092826 Lee et al. Apr 2011 A1
20110134026 Kang et al. Jun 2011 A1
20110151974 Deaguero Jun 2011 A1
20110166434 Gargiulo Jul 2011 A1
20110172503 Knepper et al. Jul 2011 A1
20110173204 Murillo et al. Jul 2011 A1
20110173574 Clavin et al. Jul 2011 A1
20110213278 Horak et al. Sep 2011 A1
20110224556 Moon et al. Sep 2011 A1
20110224564 Moon et al. Sep 2011 A1
20110230782 Bartol et al. Sep 2011 A1
20110248914 Sherr Oct 2011 A1
20110313762 Ben-David et al. Dec 2011 A1
20120029322 Wartena et al. Feb 2012 A1
20120051005 Vanfleteren et al. Mar 2012 A1
20120066163 Balls et al. Mar 2012 A1
20120101357 Hoskuldsson et al. Apr 2012 A1
20120157789 Kangas et al. Jun 2012 A1
20120165695 Kidmose et al. Jun 2012 A1
20120188158 Tan et al. Jul 2012 A1
20120203076 Fatta et al. Aug 2012 A1
20120209134 Morita et al. Aug 2012 A1
20120265090 Fink et al. Oct 2012 A1
20120265480 Oshima Oct 2012 A1
20120283526 Gommesen et al. Nov 2012 A1
20120293548 Perez et al. Nov 2012 A1
20120302858 Kidmose et al. Nov 2012 A1
20120323521 De Foras et al. Dec 2012 A1
20130004033 Trugenberger Jan 2013 A1
20130005303 Song et al. Jan 2013 A1
20130020948 Han et al. Jan 2013 A1
20130027341 Mastandrea Jan 2013 A1
20130077820 Marais et al. Mar 2013 A1
20130080794 Hsieh Mar 2013 A1
20130123656 Heck May 2013 A1
20130127708 Jung et al. May 2013 A1
20130135223 Shai May 2013 A1
20130141375 Ludwig et al. Jun 2013 A1
20130165813 Chang et al. Jun 2013 A1
20130191741 Dickinson et al. Jul 2013 A1
20130198694 Rahman et al. Aug 2013 A1
20130207889 Chang et al. Aug 2013 A1
20130217998 Mahfouz et al. Aug 2013 A1
20130232095 Tan Sep 2013 A1
20130265229 Forutanpour et al. Oct 2013 A1
20130265437 Thorn et al. Oct 2013 A1
20130271292 McDermott Oct 2013 A1
20130312256 Wesselmann et al. Nov 2013 A1
20130317382 Le Nov 2013 A1
20130317648 Assad Nov 2013 A1
20130332196 Pinsker Dec 2013 A1
20140020945 Hurwitz et al. Jan 2014 A1
20140028546 Jeon et al. Jan 2014 A1
20140045547 Singamsetty et al. Feb 2014 A1
20140049417 Abdurrahman et al. Feb 2014 A1
20140052150 Taylor et al. Feb 2014 A1
20140121471 Walker Mar 2014 A1
20140122958 Greenebrg et al. Mar 2014 A1
20140092009 Yen et al. Apr 2014 A1
20140094675 Luna et al. Apr 2014 A1
20140098018 Kim et al. Apr 2014 A1
20140107493 Yuen et al. Apr 2014 A1
20140142937 Powledge May 2014 A1
20140194062 Palin et al. Jul 2014 A1
20140196131 Lee Jul 2014 A1
20140198034 Bailey et al. Jul 2014 A1
20140198035 Bailey et al. Jul 2014 A1
20140198944 Forutanpour et al. Jul 2014 A1
20140223462 Aimone et al. Aug 2014 A1
20140236031 Banet et al. Aug 2014 A1
20140240103 Lake et al. Aug 2014 A1
20140240223 Lake et al. Aug 2014 A1
20140245200 Holz Aug 2014 A1
20140249397 Lake et al. Sep 2014 A1
20140257141 Giuffrida et al. Sep 2014 A1
20140277622 Raniere Sep 2014 A1
20140278441 Ton et al. Sep 2014 A1
20140285326 Luna et al. Sep 2014 A1
20140297528 Agrawal Oct 2014 A1
20140299362 Park et al. Oct 2014 A1
20140304665 Holz Oct 2014 A1
20140310595 Acharya et al. Oct 2014 A1
20140330404 Abdelghani et al. Nov 2014 A1
20140334083 Bailey Nov 2014 A1
20140334653 Luna et al. Nov 2014 A1
20140337861 Chang et al. Nov 2014 A1
20140340857 Hsu et al. Nov 2014 A1
20140344731 Holz Nov 2014 A1
20140349257 Connor Nov 2014 A1
20140354528 Laughlin et al. Dec 2014 A1
20140354529 Laughlin et al. Dec 2014 A1
20140355825 Kim et al. Dec 2014 A1
20140358024 Nelson et al. Dec 2014 A1
20140361988 Katz et al. Dec 2014 A1
20140364703 Kim et al. Dec 2014 A1
20140365163 Jallon Dec 2014 A1
20140375465 Fenuccio et al. Dec 2014 A1
20140376773 Holz Dec 2014 A1
20150006120 Sett et al. Jan 2015 A1
20150010203 Muninder et al. Jan 2015 A1
20150011857 Henson et al. Jan 2015 A1
20150025355 Bailey et al. Jan 2015 A1
20150029092 Holz et al. Jan 2015 A1
20150035827 Yamaoka et al. Feb 2015 A1
20150045689 Barone Feb 2015 A1
20150045699 Mokaya et al. Feb 2015 A1
20150051470 Bailey et al. Feb 2015 A1
20150057506 Luna et al. Feb 2015 A1
20150057770 Bailey et al. Feb 2015 A1
20150065840 Bailey Mar 2015 A1
20150070270 Bailey et al. Mar 2015 A1
20150070274 Morozov Mar 2015 A1
20150084860 Aleem et al. Mar 2015 A1
20150094564 Tashman et al. Apr 2015 A1
20150106052 Balakrishnan et al. Apr 2015 A1
20150109202 Ataee et al. Apr 2015 A1
20150124566 Lake et al. May 2015 A1
20150128094 Baldwin et al. May 2015 A1
20150141784 Morun et al. May 2015 A1
20150148641 Morun et al. May 2015 A1
20150157944 Gottlieb Jun 2015 A1
20150160621 Yilmaz Jun 2015 A1
20150169074 Ataee et al. Jun 2015 A1
20150182113 Utter, II Jul 2015 A1
20150182130 Utter, II Jul 2015 A1
20150182160 Kim et al. Jul 2015 A1
20150182163 Utter Jul 2015 A1
20150182164 Utter, II Jul 2015 A1
20150182165 Miller et al. Jul 2015 A1
20150186609 Utter, II Jul 2015 A1
20150193949 Katz et al. Jul 2015 A1
20150216475 Luna et al. Aug 2015 A1
20150220152 Tait et al. Aug 2015 A1
20150223716 Korkala et al. Aug 2015 A1
20150230756 Luna et al. Aug 2015 A1
20150234426 Bailey et al. Aug 2015 A1
20150237716 Su et al. Aug 2015 A1
20150261306 Lake Sep 2015 A1
20150261318 Scavezze et al. Sep 2015 A1
20150277575 Ataee et al. Oct 2015 A1
20150296553 DiFranco et al. Oct 2015 A1
20150302168 De Sapio et al. Oct 2015 A1
20150309563 Connor Oct 2015 A1
20150309582 Gupta Oct 2015 A1
20150312175 Langholz Oct 2015 A1
20150313496 Connor Nov 2015 A1
20150325202 Lake Nov 2015 A1
20150332013 Lee et al. Nov 2015 A1
20150346701 Gordon et al. Dec 2015 A1
20150366504 Connor Dec 2015 A1
20150370326 Chapeskie et al. Dec 2015 A1
20150370333 Ataee et al. Dec 2015 A1
20160011668 Gilad-Bachrach et al. Jan 2016 A1
20160020500 Matsuda Jan 2016 A1
20160026853 Wexler et al. Jan 2016 A1
20160049073 Lee Feb 2016 A1
20160092504 Mitri et al. Mar 2016 A1
20160144172 Hsueh et al. May 2016 A1
20160150636 Otsubo May 2016 A1
20160156762 Bailey et al. Jun 2016 A1
20160162604 Xioli et al. Jun 2016 A1
20160187992 Yamamoto et al. Jun 2016 A1
20160199699 Klassen Jul 2016 A1
20160202081 Debieuvre et al. Jul 2016 A1
20160207201 Herr et al. Jul 2016 A1
20160235323 Tadi et al. Aug 2016 A1
20160239080 Marcolina et al. Aug 2016 A1
20160259407 Schick Sep 2016 A1
20160262687 Imperial Sep 2016 A1
20160274758 Bailey Sep 2016 A1
20160275726 Mullins Sep 2016 A1
20160292497 Kehtarnavaz et al. Oct 2016 A1
20160309249 Wu et al. Oct 2016 A1
20160313798 Connor Oct 2016 A1
20160313801 Wagner et al. Oct 2016 A1
20160313890 Walline et al. Oct 2016 A1
20160313899 Noel Oct 2016 A1
20160350973 Shapira et al. Dec 2016 A1
20170031502 Rosenberg et al. Feb 2017 A1
20170035313 Hong et al. Feb 2017 A1
20170061817 Mettler May Mar 2017 A1
20170068445 Lee et al. Mar 2017 A1
20170080346 Abbas Mar 2017 A1
20170090604 Barbier Mar 2017 A1
20170091567 Wang et al. Mar 2017 A1
20170119472 Herrmann et al. May 2017 A1
20170123487 Hazra et al. May 2017 A1
20170124816 Yang et al. May 2017 A1
20170161635 Oono et al. Jun 2017 A1
20170188878 Lee Jul 2017 A1
20170188980 Ash Jul 2017 A1
20170197142 Stafford et al. Jul 2017 A1
20170259167 Cook et al. Sep 2017 A1
20170262064 Ofir et al. Sep 2017 A1
20170285756 Wang et al. Oct 2017 A1
20170285848 Rosenberg et al. Oct 2017 A1
20170296363 Yetkin et al. Oct 2017 A1
20170301630 Nguyen et al. Oct 2017 A1
20170308118 Ito Oct 2017 A1
20170344706 Torres et al. Nov 2017 A1
20170347908 Watanabe et al. Dec 2017 A1
20180000367 Longinotti-Buitoni Jan 2018 A1
20180020951 Kaifosh et al. Jan 2018 A1
20180020978 Kaifosh et al. Jan 2018 A1
20180024634 Kaifosh et al. Jan 2018 A1
20180024635 Kaifosh et al. Jan 2018 A1
20180024641 Mao et al. Jan 2018 A1
20180064363 Morun et al. Mar 2018 A1
20180067553 Morun et al. Mar 2018 A1
20180081439 Daniels Mar 2018 A1
20180088765 Bailey Mar 2018 A1
20180092599 Kerth et al. Apr 2018 A1
20180095630 Bailey Apr 2018 A1
20180101235 Bodensteiner et al. Apr 2018 A1
20180101289 Bailey Apr 2018 A1
20180120948 Aleem et al. May 2018 A1
20180140441 Poirters May 2018 A1
20180150033 Lake et al. May 2018 A1
20180153430 Ang et al. Jun 2018 A1
20180153444 Yang et al. Jun 2018 A1
20180154140 Bouton et al. Jun 2018 A1
20180178008 Bouton et al. Jun 2018 A1
20180240459 Weng et al. Aug 2018 A1
20180301057 Hargrove et al. Oct 2018 A1
20180307314 Connor Oct 2018 A1
20180321745 Morun et al. Nov 2018 A1
20180321746 Morun et al. Nov 2018 A1
20180333575 Bouton Nov 2018 A1
20180344195 Morun et al. Dec 2018 A1
20180360379 Harrison et al. Dec 2018 A1
20190008453 Spoof Jan 2019 A1
20190025919 Tadi et al. Jan 2019 A1
20190033967 Morun et al. Jan 2019 A1
20190033974 Mu et al. Jan 2019 A1
20190038166 Tavabi et al. Feb 2019 A1
20190076716 Chiou et al. Mar 2019 A1
20190121305 Kaifosh et al. Apr 2019 A1
20190121306 Kaifosh et al. Apr 2019 A1
20190146809 Lee et al. May 2019 A1
20190150777 Guo et al. May 2019 A1
20190192037 Morun et al. Jun 2019 A1
20190212817 Kaifosh et al. Jul 2019 A1
20190223748 Al-natsheh et al. Jul 2019 A1
20190227627 Kaifosh et al. Jul 2019 A1
20190228330 Kaifosh et al. Jul 2019 A1
20190228533 Giurgica-Tiron et al. Jul 2019 A1
20190228579 Kaifosh et al. Jul 2019 A1
20190228590 Kaifosh et al. Jul 2019 A1
20190228591 Giurgica-Tiron et al. Jul 2019 A1
20190247650 Tran Aug 2019 A1
20190324549 Araki et al. Oct 2019 A1
20190348026 Berenzweig et al. Nov 2019 A1
20190348027 Berenzweig et al. Nov 2019 A1
20190357787 Barachant et al. Nov 2019 A1
20190362557 Lacey et al. Nov 2019 A1
20200069210 Berenzweig et al. Mar 2020 A1
20200069211 Berenzweig et al. Mar 2020 A1
20200073483 Berenzweig et al. Mar 2020 A1
20200097081 Stone et al. Mar 2020 A1
Foreign Referenced Citations (51)
Number Date Country
2 902 045 Aug 2014 CA
2 921 954 Feb 2015 CA
2 939 644 Aug 2015 CA
1838933 Sep 2006 CN
103777752 May 2014 CN
105190578 Dec 2015 CN
106102504 Nov 2016 CN
110300542 Oct 2019 CN
44 12 278 Oct 1995 DE
0 301 790 Feb 1989 EP
2 198 521 Jun 2012 EP
2 959 394 Dec 2015 EP
3 104 737 Dec 2016 EP
3 487 395 May 2019 EP
H05-277080 Oct 1993 JP
2005-095561 Apr 2005 JP
2009-50679 Mar 2009 JP
2010-520561 Jun 2010 JP
2016-507851 Mar 2016 JP
2017-509386 Apr 2017 JP
10-2012-0094870 Aug 2012 KR
10-2012-0097997 Sep 2012 KR
10-2015-0123254 Nov 2015 KR
10-2016-0121552 Oct 2016 KR
10-2017-0107283 Sep 2017 KR
10-1790147 Oct 2017 KR
2008109248 Sep 2008 WO
2009042313 Apr 2009 WO
2010104879 Sep 2010 WO
2011070554 Jun 2011 WO
2012155157 Nov 2012 WO
2014130871 Aug 2014 WO
2014186370 Nov 2014 WO
2014194257 Dec 2014 WO
2014197443 Dec 2014 WO
2015027089 Feb 2015 WO
2015073713 May 2015 WO
2015081113 Jun 2015 WO
2015123445 Aug 2015 WO
2015199747 Dec 2015 WO
2016041088 Mar 2016 WO
2017062544 Apr 2017 WO
2017092225 Jun 2017 WO
2017120669 Jul 2017 WO
2017172185 Oct 2017 WO
2017208167 Dec 2017 WO
2018022602 Feb 2018 WO
2019099758 May 2019 WO
2019217419 Nov 2019 WO
2020047429 Mar 2020 WO
2020061440 Mar 2020 WO
Non-Patent Literature Citations (136)
Entry
Costanza et al., “EMG as a Subtle Input Interface for Mobile Computing”, Mobile HCI 2004, LNCS 3160, edited by S. Brewster and M. Dunlop, Springer-Verlag Berlin Heidelberg, pp. 426-430, 2004.
Costanza et al., “Toward Subtle Intimate Interfaces for Mobile Devices Using an EMG Controller”, CHI 2005, Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 481-489, 2005.
Ghasemzadeh et al., “A Body Sensor Network With Electromyogram and Inertial Sensors: Multimodal Interpretation of Muscular Activities”, IEEE Transactions on Information Technology in Biomedicine, vol. 14, No. 2, pp. 198-206, Mar. 2010.
Gourmelon et al., “Contactless sensors for Surface Electromyography”, Proceedings of the 28th IEEE EMBS Annual International Conference, New York City, NY, Aug. 30-Sep. 3, 2006, pp. 2514-2517.
International Search Report and Written Opinion, dated May 16, 2014, for corresponding International Application No. PCT/US2014/017799, 9 pages.
International Search Report and Written Opinion, dated Aug. 21, 2014, for corresponding International Application No. PCT/US2014/037863, 10 pages.
International Search Report and Written Opinion, dated Nov. 21, 2014, for corresponding International Application No. PCT/US2014/052143, 9 pages.
International Search Report and Written Opinion, dated Feb. 27, 2015, for corresponding International Application No. PCT/US2014/067443, 10 pages.
International Search Report and Written Opinion, dated May 27, 2015, for corresponding International Application No. PCT/US2015/015675, 9 pages.
Morris et al., “Emerging Input Technologies for Always-Available Mobile Interaction”, Foundations and Trends in Human-Computer Interaction 4(4):245-316, 2010. (74 total pages).
Naik et al., “Real-Time Hand Gesture Identification for Human Computer Interaction Based on ICA of Surface Electromyogram”, IADIS International Conference Interfaces and Human Computer Interaction, 2007, 8 pages.
Picard et al., “Affective Wearables”, Proceedings of the IEEE 1st International Symposium on Wearable Computers, ISWC, Cambridge, MA, USA, Oct. 13-14, 1997, pp. 90-97.
Rekimoto, “Gesture Wrist and GesturePad: Unobtrusive Wearable Interaction Devices”, ISWC '01 Proceedings of the 5th IEEE International Symposium on Wearable Computers, 2001, 7 pages.
Saponas et al., “Making Muscle-Computer Interfaces More Practical”, CHI 2010, Atlanta, Georgia, USA, Apr. 10-15, 2010, 4 pages.
Sato et al., “Touche: Enhancing Touch Interaction on Humans, Screens, Liquids, and Everyday Objects”, CHI' 12, May 5-10, 2012, Austin, Texas.
Ueno et al., “A Capacitive Sensor System for Measuring Laplacian Electromyogram through Cloth: A Pilot Study”, Proceedings of the 29th Annual International Conference of the IEEE EMBS, Cite Internationale, Lyon, France, Aug. 23-26, 2007.
Ueno et al., “Feasibility of Capacitive Sensing of Surface Electromyographic Potential through Cloth”, Sensors and Materials 24(6):335-346, 2012.
Xiong et al., “A Novel HCI based on EMG and IMU”, Proceedings of the 2011 IEEE International Conference on Robotics and Biomimetics, Phuket, Thailand, Dec. 7-11, 2011, 5 pages.
Zhang et aL, “A Framework for Hand Gesture Recognition Based on Accelerometer and EMG Sensors”, IEEE Transactions on Systems, Man, and Cybernetics—Part A: Systems and Humans, vol. 41, No. 6, pp. 1064-1076, Nov. 2011.
Non-Final Office Action received for U.S. Appl. No. 14/505,836 dated Jun. 30, 2016, 37 pages.
Xu et al., “Hand Gesture Recognition and Virtual Game Control Based on 3D Accelerometer and EMG Sensors”, Proceedings of the 14th international conference on Intelligent user interfaces, Sanibel Island, Florida, Feb. 8-11, 2009, pp. 401-406.
Communication pursuant to Rule 164(1) EPC, dated Sep. 30, 2016, for corresponding EP Application No. 14753949.8, 7 pages.
Non-Final Office Action received for U.S. Appl. No. 14/505,836 dated Feb. 23, 2017, 54 pages.
Brownlee, “Finite State Machines (FSM): Finite state machines as a control technique in Artificial Intelligence (AI)”, Jun. 2002, 12 pages.
Final Office Action received for U.S. Appl. No. 14/505,836 dated Jul. 28, 2017, 52 pages.
Non-Final Office Action received for U.S. Appl. No. 16/557,342 dated Oct. 22, 2019, 16 pages.
International Preliminary Report on Patentability received for PCT Application Serial No. PCT/US2014/017799 dated Sep. 3, 2015, 8 pages.
International Preliminary Report on Patentability received for PCT Application Serial No. PCT/US2014/037863 dated Nov. 26, 2015, 8 pages.
International Preliminary Report on Patentability received for PCT Application Serial No. PCT/US2014/052143 dated Mar. 3, 2016, 7 pages.
International Preliminary Report on Patentability received for PCT Application Serial No. PCT/US2014/067443 dated Jun. 9, 2016, 7 pages.
International Preliminary Report on Patentability received for PCT Application Serial No. PCT/US2015/015675 dated Aug. 25, 2016, 8 pages.
International Search Report and Written Opinion received for PCT Application Serial No. PCT/US2019/049094 dated Jan. 9, 2020, 27 pages.
Corazza et al., “A Markerless Motion Capture System to Study Musculoskeletal Biomechanics: Visual Hull and Simulated Annealing Approach”, Annals of Biomedical Engineering, vol. 34, No. 6, Jul. 2006, pp. 1019-1029.
Non-Final Office Action received for U.S. Appl. No. 15/659,072 dated Apr. 30, 2019, 99 pages.
Final Office Action received for U.S. Appl. No. 15/659,072 dated Nov. 29, 2019, 36 pages.
Non-Final Office Action received for U.S. Appl. No. 16/353,998 dated May 24, 2019, 20 pages.
Final Office Action received for U.S. Appl. No. 16/557,342 dated Jan. 28, 2020, 15 pages.
Non-Final Office Action received for U.S. Appl. No. 16/557,383 dated Dec. 23, 2019, 53 pages.
Non-Final Office Action received for U.S. Appl. No. 16/557,427 dated Dec. 23, 2019, 52 pages.
Non-Final Office Action received for U.S. Appl. No. 15/816,435 dated Jan. 22, 2020, 35 pages.
Non-Final Office Action received for U.S. Appl. No. 16/577,207 dated Nov. 19, 2019, 32 pages.
Final Office Action received for U.S. Appl. No. 16/577,207 dated Feb. 4, 2020, 76 pages.
Non-Final Office Action received for U.S. Appl. No. 15/974,430 dated May 16, 2019, 12 pages.
Final Office Action received for U.S. Appl. No. 15/974,430 dated Dec. 11, 2019, 30 pages.
Non-Final Office Action received for U.S. Appl. No. 15/974,384 dated May 16, 2019, 13 pages.
Notice of Allowance received for U.S. Appl. No. 15/974,384 dated Nov. 4, 2019, 39 pages.
Final Office Action received for U.S. Appl. No. 16/353,998 dated Nov. 29, 2019, 33 pages.
Non-Final Office Action received for U.S. Appl. No. 15/974,454 dated Dec. 20, 2019, 41 pages.
Final Office Action received for U.S. Appl. No. 15/974,454 dated Apr. 9, 2020, 19 pages.
International Search Report and Written Opinion for International Application No. PCT/US2017/043693 dated Oct. 6, 2017.
International Preliminary Report on Patentability for International Application No. PCT/US2017/043693 dated Feb. 7, 2019.
International Search Report and Written Opinion for International Application No. PCT/US2017/043686 dated Oct. 6, 2017.
International Preliminary Report on Patentability for International Application No. PCT/US2017/043686 dated Feb. 7, 2019.
International Search Report and Written Opinion for International Application No. PCT/US2017/043791 dated Oct. 5, 2017.
International Preliminary Report on Patentability for International Application No. PCT/US2017/043791 dated Feb. 7, 2019.
International Search Report and Written Opinion for International Application No. PCT/US2017/043792 dated Oct. 5, 2017.
International Preliminary Report on Patentability for International Application No. PCT/US2017/043792 dated Feb. 7, 2019.
International Search Report and Written Opinion for International Application No. PCT/US2018/056768 dated Jan. 15, 2019.
International Search Report and Written Opinion for International Application No. PCT/US2018/061409 dated Mar. 12, 2019.
Benko et al., Enhancing Input on and Above the Interactive Surface with Muscle Sensing. The ACM International Conference on Interactive Tabletops and Surfaces. ITS '09.2009:93-100.
Boyali et al., Spectral Collaborative Representation based Classification for hand gestures recognition on electromyography signals. Biomedical Signal Processing and Control. 2016;24: 11-18.
Cheng et al., A Novel Phonology- and Radical-Coded Chinese Sign Language Recognition Framework Using Accelerometer and Surface Electromyography Sensors. Sensors. 2015;15:23303-24.
Csapo et al., Evaluation of Human-Myo Gesture Control Capabilities in Continuous Search and Select Operations. 7th IEEE International Conference on Cognitive Infocommunications. 2016;000415-20.
Delis et al., Development of a Myoelectric Controller Based on Knee Angle Estimation. Biodevices 2009. International Conference on Biomedical Electronics and Devices. Jan. 17, 2009. 7 pages.
Diener et al., Direct conversion from facial myoelectric signals to speech using Deep Neural Networks. 2015 International Joint Conference on Neural Networks (IJCNN). Oct. 1, 2015. 7 pages.
Ding et al., HMM with improved feature extraction-based feature parameters for identity recognition of gesture command operators by using a sensed Kinect-data stream. Neurocomputing. 2017;262: 108-19.
Farina et al., Man/machine interface based on the discharge timings of spinal motor neurons after targeted muscle reinnervation. Nature. Biomedical Engineering. 2017;1: 1-12.
Gallina et al., Surface EMG Biofeedback. Surface Electromyography: Physiology, Engineering, and Applications. 2016:485-500.
Jiang, Purdue University Graduate School Thesis/Dissertation Acceptance. Graduate School Form 30. Updated Jan. 15, 2015. 24 pages.
Kawaguchi et al., Estimation of Finger Joint Angles Based on Electromechanical Sensing of Wrist Shape. IEEE Transactions on Neural Systems and Rehabilitation Engineering. 2017;25(9): 1409-18.
Kim et al., Real-Time Human Pose Estimation and Gesture Recognition from Depth Images Using Superpixels and SVM Classifier. Sensors. 2015;15:12410-27.
Koerner, Design and Characterization of the Exo-Skin Haptic Device: a Novel Tendon Actuated Textile Hand Exoskeleton. 2017. 5 pages.
Li et al., Motor Function Evaluation of Hemiplegic Upper-Extremities Using Data Fusion from Wearable Inertial and Surface EMG Sensors. Sensors. MDPI. 2017;17(582):1-17.
Mcintee, A Task Model of Free-Space Movement-Based Gestures. Dissertation. Graduate Faculty of North Carolina State University. Computer Science. 2016. 129 pages.
Naik et al., Source Separation and Identification issues in bio signals: A solution using Blind source separation. Intech. 2009. 23 pages.
Naik et al., Subtle Hand gesture identification for HCI using Temporal Decorrelation Source Separation BSS of surface EMG. Digital Image Computing Techniques and Applications. IEEE Computer Society. 2007:30-7.
Negro et al., Multi-channel intramuscular and surface EMG decomposition by convolutive blind source separation. Journal of Neural Engineering. 2016;13:1-17.
Saponas et al., Demonstrating the Feasibility of Using Forearm Electromyography for Muscle-Computer Interfaces. CHI 2008 Proceedings. Physiological Sensing for Input. 2008:515-24.
Saponas et al., Enabling Always-Available Input with Muscle-Computer Interfaces. UIST '09. 2009:167-76.
Sauras-Perez et al., A Voice and Pointing Gesture Interaction System for Supporting Human Spontaneous Decisions in Autonomous Cars. Clemson University. All Dissertations. 2017. 174 pages.
Shen et al., I am a Smartwatch and I can Track my User's Arm. University of Illinois at Urbana-Champaign. MobiSys '16. 12 pages.
Son et al., Evaluating the utility of two gestural discomfort evaluation methods. PLOS One. 2017. 21 pages.
Strbac et al., Microsoft Kinect-Based Artificial Perception System for Control of Functional Electrical Stimulation Assisted Grasping. Hindawi Publishing Corporation. BioMed Research International. 2014. 13 pages.
Torres, Myo Gesture Control Armband. PCMag. https://www.pcmag.com/article2/0,2817,2485462,00.asp. 2015. 9 pages.
Wodzinski et al., Sequential Classification of Palm Gestures Based on A* Algorithm and MLP Neural Network for Quadrocopter Control. Metrol. Meas. Syst. 2017;24(2):265-76.
Xue et al., Multiple Sensors Based Hand Motion Recognition Using Adaptive Directed Acyclic Graph. Applied Sciences. MDPI. 2017;7(358):1-14.
International Search Report and Written Opinion for International Application No. PCT/US2018/063215 dated Mar. 21, 2019.
International Search Report and Written Opinion for International Application No. PCT/US2019/015134 dated May 15, 2019.
International Search Report and Written Opinion for International Application No. PCT/US2019/015167 dated May 21, 2019.
International Search Report and Written Opinion for International Application No. PCT/US2019/015174 dated May 21, 2019.
International Search Report and Written Opinion for International Application No. PCT/US2019/015238 dated May 16, 2019.
International Search Report and Written Opinion for International Application No. PCT/US2019/015183 dated May 3, 2019.
International Search Report and Written Opinion for International Application No. PCT/US2019/015180 dated May 16, 2019.
International Search Report and Written Opinion for International Application No. PCT/US2019/015244 dated May 16, 2019.
International Search Report and Written Opinion for International Application No. PCT/US2019/020065 dated May 16, 2019.
Arkenbout et al., Robust Hand Motion Tracking through Data Fusion of 5DT Data Glove and Nimble VR Kinect Camera Measurements. Sensors. 2015;15:31644-71.
Davoodi et al., Development of a Physics-Based Target Shooting Game to Train Amputee Users of Multijoint Upper Limb Prostheses. Presence. Massachusetts Institute of Technology. 2012;21(1):85-95.
Favorskaya et al., Localization and Recognition of Dynamic Hand Gestures Based on Hierarchy of Manifold Classifiers. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. 2015;XL-5/W6:1-8.
Hauschild et al., A Virtual Reality Environment for Designing and Fitting Neural Prosthetic Limbs. IEEE Transactions on Neural Systems and Rehabilitation Engineering. 2007;15(1):9-15.
Lee et al., Motion and Force Estimation System of Human Fingers. Journal of Institute of Control, Robotics and Systems. 2011;17(10):1014-1020.
Lopes et al., Hand/arm gesture segmentation by motion using IMU and EMG sensing. ScienceDirect. Elsevier. Procedia Manufacturing. 2017;11:107-13.
Martin et al., A Novel Approach of Prosthetic Arm Control using Computer Vision, Biosignals, and Motion Capture. IEEE. 2014. 5 pages.
Mendes et al., Sensor Fusion and Smart Sensor in Sports and Biomedical Applications. Sensors. 2016;16(1569):1-31.
Sartori et al., Neural Data-Driven Musculoskeletal Modeling for Personalized Neurorehabilitation Technologies. IEEE Transactions on Biomedical Engineering. 2016;63(5):879-93.
International Search Report and Written Opinion for International Application No. PCT/US2019/015180 dated May 28, 2019.
International Search Report and Written Opinion for International Application No. PCT/US2019/028299 dated Aug. 9, 2019.
Invitation to Pay Additional Fees for International Application No. PCT/US2019/031114 dated Aug. 6, 2019.
Gopura et al., A Human Forearm and wrist motion assist exoskeleton robot with EMG-based fuzzy-neuro control. Proceedings of the 2nd IEEE/RAS-EMBS International Conference on Biomedical Robotics and Biomechatronics. Oct. 19-22, 2008. 6 pages.
Valero-Cuevas et al., Computational Models for Neuromuscular Function. NIH Public Access Author Manuscript. Jun. 16, 2011. 52 pages.
Yang et al., Surface EMG based handgrip force predictions using gene expression programming. Neurocomputing. 2016;207:568-579.
Extended European Search Report for European Application No. EP 17835111.0 dated Nov. 21, 2019.
Extended European Search Report for European Application No. EP 17835140.9 dated Nov. 26, 2019.
International Search Report and Written Opinion for International Application No. PCT/US2019/037302 dated Oct. 11, 2019.
International Search Report and Written Opinion for International Application No. PCT/US2019/034173 dated Sep. 18, 2019.
International Search Report and Written Opinion for International Application No. PCT/US2019/042579 dated Oct. 31, 2019.
Invitation to Pay Additional Fees for International Application No. PCT/US2019/049094 dated Oct. 24, 2019.
International Search Report and Written Opinion for International Application No. PCT/US2019/052131 dated Dec. 6, 2019.
International Search Report and Written Opinion for International Application No. PCT/US2019/046351 dated Nov. 7, 2019.
Al-Mashhadany, Inverse Kinematics Problem (IKP) of 6-DOF Manipulator by Locally Recurrent Neural Networks (LRNNs). Management and Service Science (MASS). 2010 International Conference on, IEEE. Aug. 24, 2010. 5 pages. ISBN: 978-1-4244-5325-2.
Kipke et al., Silicon-substrate Intracortical Microelectrode Arrays for Long-Term Recording of Neuronal Spike Activity in Cerebral Cortex. IEEE Transactions on Neural Systems and Rehabilitation Engineering. 2003;11(2):151-155.
Marcard et al., Sparse Inertial Poser: Automatic 3D Human Pose Estimation from Sparse IMUs. Eurographics. 2017;36(2). 12 pages.
Mohamed, Homogeneous cognitive based biometrics for static authentication. Dissertation submitted to University of Victoria, Canada. 2010. 149 pages. URL: http://hdl.handle.net/1828/3211 [last accessed Oct. 11, 2019].
Wittevrongel et al., Spatiotemporal Beamforming: A Transparent and Unified Decoding Approach to Synchronous Visual Brain-Computer Interfacing. Frontiers in Neuroscience. 2017;11:1-12.
Zacharaki et al., Spike pattern recognition by supervised classification in low dimensional embedding space. Brain Informatics. 2016;3:73-8. DOI: 10.1007/s40708-016-0044-4.
Extended European Search Report received for EP Patent Application Serial No. 17835112.8 dated Feb. 5, 2020, 17 pages.
Non-Final Office Action received for U.S. Appl. No. 15/882,858 dated Oct. 30, 2019, 22 pages.
Non-Final Office Action received for U.S. Appl. No. 15/882,858 dated Sep. 2, 2020, 66 pages.
Non-Final Office Action received for U.S. Appl. No. 15/974,454 dated Aug. 20, 2020, 59 pages.
Notice of Allowance received for U.S. Appl. No. 16/557,427 dated Aug. 19, 2020, 22 pages.
Final Office Action received for U.S. Appl. No. 15/882,858 dated Jun. 2, 2020, 127 pages.
Non-Final Office Action received for U.S. Appl. No. 15/659,072 dated Jun. 5, 2020, 59 pages.
Non-Final Office Action received for U.S. Appl. No. 16/353,998 dated May 26, 2020, 60 pages.
Non-Final Office Action received for U.S. Appl. No. 15/974,430 dated Apr. 30, 2020, 57 pages.
Final Office Action received for U.S. Appl. No. 16/557,383 dated Jun. 2, 2020, 66 pages.
Final Office Action received for U.S. Appl. No. 16/557,427 dated Jun. 5, 2020, 95 pages.
Non-Final Office Action received for U.S. Appl. No. 16/557,342 dated Jun. 15, 2020, 46 pages.
Continuations (1)
Parent: U.S. Appl. No. 15/974,384, May 2018, US
Child: U.S. Appl. No. 16/785,680, US