The present systems, devices, and methods generally relate to wearable devices that perform automated gesture identification in real-time, and to methods of performing such gesture identification.
Electronic devices are commonplace throughout most of the world today. Advancements in integrated circuit technology have enabled the development of electronic devices that are sufficiently small and lightweight to be carried by the user. Such “portable” electronic devices may include on-board power supplies (such as batteries or other power storage systems) and may be designed to operate without any wire-connections to other electronic systems; however, a small and lightweight electronic device may still be considered portable even if it includes a wire-connection to another electronic system. For example, a microphone may be considered a portable electronic device whether it is operated wirelessly or through a wire-connection.
The convenience afforded by the portability of electronic devices has fostered a huge industry. Smartphones, audio players, laptop computers, tablet computers, and ebook readers are all examples of portable electronic devices. However, the convenience of being able to carry a portable electronic device has also introduced the inconvenience of having one's hand(s) encumbered by the device itself. This problem is addressed by making an electronic device not only portable, but wearable.
A wearable electronic device is any portable electronic device that a user can carry without physically grasping, clutching, or otherwise holding onto the device with their hands. For example, a wearable electronic device may be attached or coupled to the user by a strap or straps, a band or bands, a clip or clips, an adhesive, a pin and clasp, an article of clothing, tension or elastic support, an interference fit, an ergonomic form, etc. Examples of wearable electronic devices include digital wristwatches, electronic armbands, electronic rings, electronic ankle-bracelets or “anklets,” head-mounted electronic display units, hearing aids, and so on.
A wearable electronic device may provide direct functionality for a user (such as audio playback, data display, computing functions, etc.) or it may provide a mechanism with which to interact, communicate, or control another electronic device. For example, a wearable electronic device may include sensors that detect inputs effected by a user and the device may transmit signals to another electronic device based on those inputs. Sensor-types and input-types may each take on a variety of forms, including but not limited to: tactile sensors (e.g., buttons, switches, touchpads, or keys) providing manual control, acoustic sensors providing voice-control, electromyography sensors providing gesture control, and/or accelerometers providing gesture control.
A human-computer interface (“HCI”) is an example of a human-electronics interface. The present systems, devices, and methods may be applied to HCIs, but may also be applied to any other form of human-electronics interface.
A gesture identification system is any system that, in use, detects when a user performs a physical gesture and identifies, classifies, and/or recognizes the gesture that the user has performed. In practice, a gesture identification system may have a limited capacity to identify, classify, and/or recognize a predefined set of gestures and/or gesture types.
A gesture identification system may include multiple physically disparate components or it may comprise a single gesture identification device. In either case, a gesture identification system may include a wearable electronic device with on-board sensors (such as photosensors/cameras (optical, infrared, or otherwise), electromyography sensors, and/or accelerometer sensors) to detect physical gestures performed by the user. In particular, electromyography (or “EMG”) sensors may be used to detect the electrical signals produced by muscle activity when the user performs a physical gesture. Human-electronics interfaces that employ EMG sensors have been proposed. For example, U.S. Pat. No. 6,244,873 and U.S. Pat. No. 8,170,656 describe such systems.
In a typical example of such a system, the user dons a wearable EMG device and performs physical gestures to control functions of a separate electronic device. EMG signals corresponding to each user-performed gesture are detected by the wearable EMG device and then either processed by the wearable EMG device itself using an on-board processor or transmitted to a separate computer system for processing. In either case, processing the EMG signals typically involves automatically identifying the corresponding gesture(s) performed by the user based on the EMG signals. It is advantageous to perform gesture identification on-board the wearable EMG device itself (i.e., using an on-board processor) because doing so enables a wider range of electronic devices to be controlled.
Known proposals for gesture identification systems that employ one or more wearable electronic device(s) (including but not limited to wearable EMG devices) are not immediately able to accurately and reliably identify gestures performed by any generic user or even by the same user under different use conditions (i.e., once the wearable device has been removed and put back on in a different position and/or with a different orientation). On the contrary, known proposals for gesture identification systems that employ one or more wearable electronic device(s) (including the two examples described in the US patents above) typically require any given user to undergo an elaborate training procedure in order to calibrate the wearable device each time the user puts on the device. A typical training procedure, carried out before the system is operable to identify gestures performed by the user (i.e., pre-runtime), requires the user to perform a series of training trials for multiple training gestures (i.e., multiple training trials for each one of multiple training gestures). The system calibrates use parameters based on the signals (e.g., the EMG signals) detected during the training gestures. The quality of the calibration typically increases with the number of training trials performed for each training gesture, so the training procedure may involve many training trials. Once a user has completed the training procedure, the system may perform reasonably well (i.e., during runtime) at identifying gestures of the specific user and under the specific use conditions for which it has been calibrated, but the system may perform very poorly at identifying gestures of other users and/or at identifying gestures of the same user under different use conditions (such as when the user sweats or after the wearable device has been removed and put back on in a different position, with a different rotation, and/or with a different orientation). 
If a different user wishes to use the wearable device, then that different user must go through the training procedure in order to recalibrate the system to work for them. Furthermore, if the same user removes the wearable device (because, for example, the user has finished using it for a period of time), then that same user typically needs to undergo the training procedure again when the wearable device is re-worn, since any subsequent use of the wearable device may involve slightly different use conditions (such as, for example, different device position, rotation, and/or orientation, and/or different skin conditions such as temperature, moisture, hair density, and so on) which may give rise to different use parameters. The requirement for each user to undergo an elaborate training procedure for each use and the inability to readily identify gestures of any generic user are limitations of known proposals for gesture identification systems employing one or more wearable electronic device(s) that degrade the overall user experience. Clearly, there is a need in the art for gesture identification systems and/or devices that perform gesture identification with improved robustness against variations in use parameters.
A method of operating a gesture identification system to identify a user-performed gesture, the gesture identification system including at least one sensor responsive to (i.e., that in use detects) user-performed gestures and a processor communicatively coupled to the at least one sensor, may be summarized as including: providing at least one signal from the at least one sensor to the processor; segmenting the at least one signal into data windows; for each ith data window in at least a subset of the data windows: determining a window class for the ith data window by the processor, the window class selected by the processor from a library of window classes, wherein each window class in the library of window classes exclusively characterizes at least one data window property; determining, by the processor, a respective probability that each gesture in a gesture library is the user-performed gesture based on a) the window class for the ith data window and, when i>1, b) the window class for at least one jth data window, where j<i; and identifying a highest-probability gesture for the ith data window by the processor, the highest-probability gesture corresponding to the gesture in the gesture library that has a highest probability of being the user-performed gesture for the ith data window; and identifying the user-performed gesture by the processor based on the respective highest-probability gestures for at least two data windows in the at least a subset of data windows. Providing at least one signal from the at least one sensor to the processor may include providing at least one substantially continuous data stream from the at least one sensor to the processor, and segmenting the at least one signal into data windows may include segmenting the at least one substantially continuous data stream into data windows in real-time by the processor.
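The real-time segmentation step described above can be illustrated with a minimal sketch. The window length, overlap, and channel count below are hypothetical illustrative values, not parameters prescribed by the present systems, devices, and methods:

```python
import numpy as np

def segment_into_windows(stream, window_len=50, overlap=25):
    """Split a (samples x channels) signal array into fixed-length
    data windows; overlapping windows approximate real-time
    segmentation of a substantially continuous data stream."""
    step = window_len - overlap
    windows = []
    for start in range(0, stream.shape[0] - window_len + 1, step):
        windows.append(stream[start:start + window_len])
    return windows

# Example: a 200-sample stream from 8 sensors (illustrative only)
signal = np.random.randn(200, 8)
windows = segment_into_windows(signal, window_len=50, overlap=25)
print(len(windows))  # 7 windows, each of shape (50, 8)
```

Each resulting data window would then be classified and scored per the method summarized above.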
The method may further include detecting initiation of the user-performed gesture by the processor based on at least one property of at least one data window. Detecting initiation of the user-performed gesture by the processor based on at least one property of at least one data window may include detecting initiation of the user-performed gesture by the processor based on at least one Root Mean Square (“RMS”) value for the at least one data window.
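RMS-based initiation detection of the kind described above might be sketched as follows; the threshold value and signal amplitudes are hypothetical, chosen only to make the example self-contained:

```python
import numpy as np

def window_rms(window):
    """Root Mean Square value of each channel in a
    (samples x channels) data window."""
    return np.sqrt(np.mean(np.square(window), axis=0))

def gesture_initiated(window, threshold=0.5):
    """Flag gesture initiation when any channel's RMS value
    exceeds a (hypothetical) activation threshold."""
    return bool(np.any(window_rms(window) > threshold))

rest = np.zeros((50, 8))          # quiescent signal
active = np.ones((50, 8)) * 0.8   # elevated muscle activity
print(gesture_initiated(rest))    # False
print(gesture_initiated(active))  # True
```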
Each window class in the library of window classes may exclusively characterize a respective range of values for the same at least one data window property. Each window class in the library of window classes may exclusively characterize a respective range of values for at least one Root Mean Square (“RMS”) value for the ith data window.
The at least one sensor may include a plurality of sensors, and: providing at least one signal from the at least one sensor to the processor may include providing a respective signal from each respective sensor in the plurality of sensors to the processor; and segmenting the at least one signal into data windows may include segmenting the signal from each respective sensor in the plurality of sensors into the data windows, wherein each data window includes a respective portion of the signal from each respective sensor in the plurality of sensors. The plurality of sensors may include N sensors, and each window class in the library of window classes may exclusively characterize a respective region in an N-dimensional hyperspace and each dimension of the N-dimensional hyperspace represents a property of the signal from a respective one of the N sensors. Each region of the N-dimensional hyperspace may represent a respective combination of ranges for Root Mean Square (“RMS”) values of the signals from the N sensors. For each window class in the library of window classes, the corresponding region in the N-dimensional hyperspace may be exclusively characterized by at least one respective angle formed in the N-dimensional hyperspace.
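One way the angular characterization described above could work is sketched below: the N-sensor RMS values form a vector in the N-dimensional hyperspace, and the window class is chosen by the angle that vector forms with each class's prototype direction, so that each class exclusively owns an angular region of the space. The prototype directions here are invented for illustration:

```python
import numpy as np

def classify_window(rms_vector, prototypes):
    """Assign a window class by finding the prototype direction
    that forms the smallest angle with the window's N-sensor RMS
    vector in the N-dimensional hyperspace."""
    v = rms_vector / np.linalg.norm(rms_vector)
    angles = [
        np.arccos(np.clip(np.dot(v, p / np.linalg.norm(p)), -1.0, 1.0))
        for p in prototypes
    ]
    return int(np.argmin(angles))

# Two illustrative class prototype directions for N = 3 sensors
prototypes = [np.array([1.0, 0.1, 0.1]), np.array([0.1, 1.0, 1.0])]
print(classify_window(np.array([0.9, 0.2, 0.1]), prototypes))  # class 0
```

Because classification depends on the direction (angle) of the RMS vector rather than its magnitude, such a scheme would be comparatively insensitive to overall amplitude variations between users and use conditions.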
The at least one sensor may include at least one muscle activity sensor selected from the group consisting of: an electromyography (EMG) sensor and a mechanomyography (MMG) sensor, and providing at least one signal from the at least one sensor to the processor may include providing at least one signal from the at least one muscle activity sensor to the processor. In addition or alternatively, the at least one sensor may include at least one inertial sensor selected from the group consisting of: an accelerometer, a gyroscope, and an inertial measurement unit (IMU), and providing at least one signal from the at least one sensor to the processor may include providing at least one signal from the at least one inertial sensor to the processor.
The gesture identification system may further include a non-transitory processor-readable storage medium that stores processor-executable gesture identification instructions, the non-transitory processor-readable storage medium communicatively coupled to the processor, and the method may further include: executing by the processor the gesture identification instructions to cause the processor to: determine the window class for each ith data window; determine the respective probability that each gesture in the gesture library is the user-performed gesture for each ith data window; identify the highest-probability gesture for each ith data window; and identify the user-performed gesture. The non-transitory processor-readable storage medium may further store the library of window classes and the gesture library.
Determining, by the processor, a respective probability that each gesture in a gesture library is the user-performed gesture may include: determining, by the processor, at least one respective n-gram transition model for each gesture in the gesture library; and determining, by the processor and for each gesture in the gesture library, a probability that the gesture is the user-performed gesture based at least in part on the at least one n-gram transition model for the gesture. Determining, by the processor, at least one respective n-gram transition model for each gesture in the gesture library may include determining, by the processor and for each gesture in the gesture library, at least one respective n-gram transition model selected from the group consisting of: a unigram transition model based on the window class of the ith data window and a bigram transition model based on the respective window classes of the ith data window and the jth=(i−1)th data window.
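The unigram/bigram scoring described above can be sketched as follows. The gesture names, class counts, and transition probabilities are hypothetical placeholders standing in for per-gesture models that would be determined in advance:

```python
import numpy as np

# Hypothetical per-gesture models (illustrative values only):
# unigram[g][c]      = P(window class c | gesture g)
# bigram[g][c1][c2]  = P(class c2 follows class c1 | gesture g)
unigram = {"fist": [0.7, 0.2, 0.1], "wave": [0.1, 0.3, 0.6]}
bigram = {
    "fist": [[0.8, 0.1, 0.1], [0.4, 0.5, 0.1], [0.3, 0.3, 0.4]],
    "wave": [[0.2, 0.3, 0.5], [0.1, 0.4, 0.5], [0.1, 0.2, 0.7]],
}

def gesture_log_probs(window_classes):
    """Score each gesture in the library against the sequence of
    window classes observed so far: a unigram term for the first
    data window, then a bigram term for each (i-1) -> i transition."""
    scores = {}
    for g in unigram:
        logp = np.log(unigram[g][window_classes[0]])
        for prev, curr in zip(window_classes, window_classes[1:]):
            logp += np.log(bigram[g][prev][curr])
        scores[g] = logp
    return scores

scores = gesture_log_probs([0, 0, 1])
print(max(scores, key=scores.get))  # 'fist'
```

The gesture with the highest score for the ith data window is that window's highest-probability gesture in the sense used above.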
For each ith data window, determining, by the processor, a respective probability that each gesture in a gesture library is the user-performed gesture may include determining, by the processor, a respective probability that each gesture in the gesture library is the user-performed gesture based on a) the window class for the ith data window and b) the window class for a single jth data window, where j is selected from the group consisting of: j=(i−1) and j<(i−1).
The gesture identification system may comprise a wearable gesture identification device that includes the at least one sensor and the processor, the wearable gesture identification device in use worn on a limb of the user, and: for each ith data window in the at least a subset of the data windows: determining a window class for the ith data window by the processor may include determining a window class for the ith data window by the processor of the wearable gesture identification device; determining, by the processor, a respective probability that each gesture in a gesture library is the user-performed gesture may include determining, by the processor of the wearable gesture identification device, a respective probability that each gesture in the gesture library is the user-performed gesture; and identifying a highest-probability gesture for the ith data window by the processor may include identifying a highest-probability gesture for the ith data window by the processor of the wearable gesture identification device; and identifying the user-performed gesture by the processor may include identifying the user-performed gesture by the processor of the wearable gesture identification device.
A gesture identification system may be summarized as including at least one sensor responsive to (i.e., to detect, sense, measure, or transduce) a physical gesture performed by a user of the gesture identification system, wherein in response to a physical gesture performed by the user the at least one sensor provides at least one signal; a processor communicatively coupled to the at least one sensor; and a non-transitory processor-readable storage medium communicatively coupled to the processor, wherein the non-transitory processor-readable storage medium stores processor-executable gesture identification instructions that, when executed by the processor, cause the gesture identification device to: segment the at least one signal into data windows; for each ith data window in at least a subset of the data windows: determine a window class for the ith data window, the window class selected from a library of window classes, wherein each window class in the library of window classes exclusively characterizes at least one data window property; determine a respective probability that each gesture in a gesture library is the gesture performed by the user based on a) the window class for the ith data window and, when i>1, b) the window class for at least one jth data window, where j<i; and identify a highest-probability gesture for the ith data window, the highest-probability gesture corresponding to the gesture in the gesture library that has a highest probability of being the gesture performed by the user for the ith data window; and identify the gesture performed by the user based on the respective highest-probability gestures for at least two data windows in the at least a subset of data windows. The at least one sensor may include a sensor selected from the group consisting of: a muscle activity sensor, an electromyography (EMG) sensor, a mechanomyography (MMG) sensor, an inertial sensor, an accelerometer, a gyroscope, and an inertial measurement unit (IMU).
The gesture identification system may further include a communication terminal communicatively coupled to the processor to in use transmit at least one signal in response to the processor identifying the gesture performed by the user.
The gesture identification system may further include a wearable gesture identification device comprising a band that in use is worn on a limb of the user, wherein the at least one sensor, the processor, and the non-transitory processor-readable storage medium are all carried by the band. The wearable gesture identification device may further include a set of pod structures that form physically coupled links of the wearable gesture identification device, wherein each pod structure in the set of pod structures is positioned adjacent and in between two other pod structures in the set of pod structures and physically coupled to the two other pod structures in the set of pod structures by the band, and wherein the set of pod structures forms a perimeter of an annular configuration.
Each window class in the library of window classes may exclusively characterize a respective range of values for the same at least one data window property.
The at least one sensor may include a plurality of sensors, each of which is responsive to physical gestures performed by the user, and the processor-executable gesture identification instructions, when executed by the processor, may cause the gesture identification device to: segment the respective signal from each respective sensor in the plurality of sensors into the data windows, wherein each data window includes a respective portion of the signal from each respective sensor in the plurality of sensors.
The at least one sensor may include N sensors, where N≥1, and each window class in the library of window classes may exclusively characterize a respective region in an N-dimensional hyperspace, each dimension of the N-dimensional hyperspace representing a property of the signal from a respective one of the N sensors. Each region of the N-dimensional hyperspace may represent a respective combination of ranges for Root Mean Square (“RMS”) values of the signals from the N sensors. For each window class in the library of window classes, the corresponding region in the N-dimensional hyperspace may be exclusively characterized by at least one respective angle formed in the N-dimensional hyperspace.
A method of operating a gesture identification system to identify a user-performed gesture, the gesture identification system including at least one sensor that is responsive to (i.e., that in use detects) user-performed gestures and a processor communicatively coupled to the at least one sensor, may be summarized as including: in response to a user-performed gesture, providing at least one signal from the at least one sensor to the processor; segmenting the at least one signal into data windows; for each data window in at least a subset of the data windows, assigning a window class to the data window by the processor, each respective window class selected by the processor from a library of window classes, wherein each window class in the library of window classes exclusively characterizes at least one data window property; and identifying the user-performed gesture by the processor based at least in part on the respective window classes of at least two data windows.
The at least one sensor may include N sensors, where N≥1, and each window class in the library of window classes may exclusively characterize a respective region in an N-dimensional hyperspace and each dimension of the N-dimensional hyperspace may represent a property of the signal from a respective one of the N sensors. Each region of the N-dimensional hyperspace may represent a respective combination of ranges for Root Mean Square (“RMS”) values of the signals from the N sensors. For each window class in the library of window classes, the corresponding region in the N-dimensional hyperspace may be exclusively characterized by at least one respective angle formed in the N-dimensional hyperspace.
Providing at least one signal from the at least one sensor to the processor may include providing at least one substantially continuous data stream from the at least one sensor to the processor, and segmenting the at least one signal into data windows may include segmenting the at least one substantially continuous data stream into data windows in real-time by the processor.
The method may further include detecting initiation of the user-performed gesture by the processor based on at least one property of at least one data window. Detecting initiation of the user-performed gesture by the processor based on at least one property of at least one data window may include detecting initiation of the user-performed gesture by the processor based on at least one Root Mean Square (“RMS”) value for the at least one data window.
Each window class in the library of window classes may exclusively characterize a respective range of values for the same at least one data window property. Each window class in the library of window classes may exclusively characterize a respective range of values for at least one data window Root Mean Square (“RMS”) value.
The at least one sensor may include a plurality of sensors, and: providing at least one signal from the at least one sensor to the processor may include providing a respective signal from each respective sensor in the plurality of sensors to the processor; and segmenting the at least one signal into data windows may include segmenting the respective signal from each respective sensor in the plurality of sensors into the data windows, wherein each data window includes a respective portion of the signal from each respective sensor in the plurality of sensors.
The method may further include: determining, by the processor, a respective probability that each gesture in a gesture library is the user-performed gesture based on the respective window classes of at least two data windows, and identifying the user-performed gesture by the processor may include identifying, by the processor, a highest-probability gesture, the highest-probability gesture corresponding to a gesture in the gesture library that has a highest probability of being the user-performed gesture. Determining, by the processor, a respective probability that each gesture in a gesture library is the user-performed gesture may include: determining, by the processor, at least one respective n-gram transition model for each gesture in the gesture library; and determining, by the processor and for each gesture in the gesture library, a probability that the gesture is the user-performed gesture based at least in part on the at least one n-gram transition model for the gesture. Determining, by the processor, at least one respective n-gram transition model for each gesture in the gesture library may include determining, by the processor and for each gesture in the gesture library, at least one respective n-gram transition model selected from the group consisting of: a unigram transition model based on the window class of a single data window and a bigram transition model based on the respective window classes of two data windows.
The at least one sensor may include at least one muscle activity sensor selected from the group consisting of: an electromyography (EMG) sensor and a mechanomyography (MMG) sensor, and providing at least one signal from the at least one sensor to the processor may include providing at least one signal from the at least one muscle activity sensor to the processor. Either in addition or alternatively, the at least one sensor may include at least one inertial sensor selected from the group consisting of: an accelerometer, a gyroscope, and an inertial measurement unit (IMU), and providing at least one signal from the at least one sensor to the processor may include providing at least one signal from the at least one inertial sensor to the processor.
The gesture identification system may further include a non-transitory processor-readable storage medium that stores processor-executable gesture identification instructions, the non-transitory processor-readable storage medium communicatively coupled to the processor, and the method may further include: executing by the processor the gesture identification instructions to cause the processor to: for each data window in at least a subset of the data windows, assign the window class to the data window; and identify the user-performed gesture based at least in part on the respective window classes of at least two data windows. The non-transitory processor-readable storage medium may further store the library of window classes and the gesture library.
The gesture identification system may comprise a wearable gesture identification device that includes the at least one sensor and the processor, the wearable gesture identification device in use worn on a limb of the user, and: for each data window in at least a subset of the data windows, assigning a window class to the data window by the processor may include assigning a window class to the data window by the processor of the wearable gesture identification device; and identifying the user-performed gesture by the processor may include identifying the user-performed gesture by the processor of the wearable gesture identification device.
A gesture identification system may be summarized as including at least one sensor responsive to (i.e., to detect, sense, measure, or transduce) a physical gesture performed by a user of the gesture identification system, wherein in response to a physical gesture performed by the user the at least one sensor provides at least one signal; a processor communicatively coupled to the at least one sensor; and a non-transitory processor-readable storage medium communicatively coupled to the processor, wherein the non-transitory processor-readable storage medium stores processor-executable gesture identification instructions that, when executed by the processor, cause the gesture identification device to: segment the at least one signal into data windows; for each data window in at least a subset of the data windows, assign a window class to the data window; and identify the physical gesture performed by the user based at least in part on the respective window classes of at least two data windows. The at least one sensor may include a sensor selected from the group consisting of: a muscle activity sensor, an electromyography (EMG) sensor, a mechanomyography (MMG) sensor, an inertial sensor, an accelerometer, a gyroscope, and an inertial measurement unit (IMU).
The gesture identification system may further include a communication terminal communicatively coupled to the processor to in use transmit at least one signal in response to the processor identifying the physical gesture performed by the user.
The gesture identification system may further include a wearable gesture identification device comprising a band that in use is worn on a limb of the user, wherein the at least one sensor, the processor, and the non-transitory processor-readable storage medium are all carried by the band. The wearable gesture identification device may further include a set of pod structures that form physically coupled links of the wearable gesture identification device, wherein each pod structure in the set of pod structures is positioned adjacent and in between two other pod structures in the set of pod structures and physically coupled to the two other pod structures in the set of pod structures by the band, and wherein the set of pod structures forms a perimeter of an annular configuration.
Each window class in the library of window classes may exclusively characterize a respective range of values for the same at least one data window property.
The at least one sensor may include N sensors, where N≥1, and each window class in the library of window classes may exclusively characterize a respective region in an N-dimensional hyperspace and each dimension of the N-dimensional hyperspace may represent a property of the signal from a respective one of the N sensors. Each region of the N-dimensional hyperspace may represent a respective combination of ranges for Root Mean Square (“RMS”) values of the signals from the N sensors. For each window class in the library of window classes, the corresponding region in the N-dimensional hyperspace may be exclusively characterized by at least one respective angle formed in the N-dimensional hyperspace.
In the drawings, identical reference numbers identify similar elements or acts. The sizes and relative positions of elements in the drawings are not necessarily drawn to scale. For example, the shapes of various elements and angles are not necessarily drawn to scale, and some of these elements are arbitrarily enlarged and positioned to improve drawing legibility. Further, the particular shapes of the elements as drawn are not necessarily intended to convey any information regarding the actual shape of the particular elements, and have been solely selected for ease of recognition in the drawings.
In the following description, certain specific details are set forth in order to provide a thorough understanding of various disclosed embodiments. However, one skilled in the relevant art will recognize that embodiments may be practiced without one or more of these specific details, or with other methods, components, materials, etc. In other instances, well-known structures associated with electronic devices, and in particular portable electronic devices such as wearable electronic devices, have not been shown or described in detail to avoid unnecessarily obscuring descriptions of the embodiments.
Unless the context requires otherwise, throughout the specification and claims which follow, the word “comprise” and variations thereof, such as, “comprises” and “comprising” are to be construed in an open, inclusive sense, that is as “including, but not limited to.”
Reference throughout this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
As used in this specification and the appended claims, the singular forms “a,” “an,” and “the” include plural referents unless the content clearly dictates otherwise. It should also be noted that the term “or” is generally employed in its broadest sense, that is as meaning “and/or” unless the content clearly dictates otherwise.
The headings and Abstract of the Disclosure provided herein are for convenience only and do not interpret the scope or meaning of the embodiments.
The various embodiments described herein provide systems, devices, and methods for gesture identification with improved robustness against variations in use parameters. This improved robustness enables reliable and accurate gesture identification for most generic users and/or under most generic use conditions without requiring the user to undergo an elaborate training procedure. Furthermore, the gesture identification methods described herein require only limited computational resources, which provides numerous benefits for wearable gesture identification devices, including without limitation: extending battery life, enhancing the speed of the gesture identification process, enhancing the quality of the gesture identification, simplifying the on-board processor and associated infrastructure, reducing cost, and reducing overall mass and complexity.
Throughout this specification and the appended claims, the term “gesture” is used to generally refer to a deliberate physical action (e.g., a movement, a stretch, a flex, a pose) performed or otherwise effected by a user. Any physical action deliberately performed or otherwise effected by a user that involves detectable signals (such as muscle activity detectable by at least one appropriately positioned muscle activity sensor, e.g., an electromyography sensor, and/or motion detectable by at least one appropriately positioned inertial sensor, e.g., an accelerometer and/or a gyroscope, and/or motion/posturing detectable by at least one optical sensor, e.g., an infrared sensor) may constitute a gesture in the present systems, devices, and methods.
In accordance with the present systems, devices, and methods, a gesture identification system includes sensors responsive to (i.e., to detect, sense, measure, or transduce) user-performed gestures and to provide one or more signal(s) in response to a user-performed gesture. As examples, device 100 includes muscle activity sensors 110 (only one called out in
Sensors 110 and 140 provide signals in response to user-performed gestures. In order to process these signals, device 100 includes an on-board processor 120 communicatively coupled to sensors 110 and 140. Processor 120 may be any type of processor, including but not limited to: a digital microprocessor or microcontroller, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a digital signal processor (DSP), a graphics processing unit (GPU), a programmable gate array (PGA), a programmable logic unit (PLU), or the like. The methods and techniques by which processor 120 processes the signals from sensors 110 and 140 are controlled by processor-executable gesture identification instructions 131 stored in a non-transitory processor-readable storage medium or memory 130 communicatively coupled to processor 120. When executed by processor 120, gesture identification instructions 131 cause device 100 to identify one or more user-performed gesture(s) based on the signals provided by sensors 110 and 140 in accordance with the present systems, devices, and methods.
Throughout this specification and the appended claims the term “communicative” as in “communicative pathway,” “communicative coupling,” and in variants such as “communicatively coupled,” is generally used to refer to any engineered arrangement for transferring and/or exchanging information. Exemplary communicative pathways include, but are not limited to, electrically conductive pathways (e.g., electrically conductive wires, electrically conductive traces), magnetic pathways (e.g., magnetic media), and/or optical pathways (e.g., optical fiber), and exemplary communicative couplings include, but are not limited to, electrical couplings, magnetic couplings, and/or optical couplings.
In addition to sensors 110 and 140, processor 120, and memory 130, device 100 also includes a battery 160 to power device 100 and at least one communication terminal 150 to transmit at least one signal in response to processor 120 identifying a user-performed gesture. Throughout this specification and the appended claims, the term “communication terminal” is generally used to refer to any physical structure that provides a telecommunications link through which a data signal may enter and/or leave a device. A communication terminal represents the end (or “terminus”) of communicative signal transfer within a device and the beginning of communicative signal transfer to/from an external device (or external devices). As examples, communication terminal 150 of device 100 may include a wireless transmitter that implements a known wireless communication protocol, such as Bluetooth®, WiFi®, or Zigbee®, and/or a tethered communication port such as a Universal Serial Bus (USB) port, a micro-USB port, a Thunderbolt® port, and/or the like. Device 100 also includes a set of eight pod structures (not called out in
Throughout this specification and the appended claims, the term “pod structure” is used to refer to an individual link, segment, pod, section, structure, component, etc. of a wearable electronic device. For the purposes of the present systems, devices, and methods, an “individual link, segment, pod, section, structure, component, etc.” (i.e., a “pod structure”) of a wearable electronic device is characterized by its ability to be moved or displaced relative to another link, segment, pod, section, structure, component, etc. of the wearable electronic device. Device 100 includes eight pod structures, but wearable electronic devices employing pod structures (e.g., device 100) are used herein as exemplary devices only and the present systems, devices, and methods may be applied to/with wearable electronic devices that do not employ pod structures or that employ any number of pod structures.
Throughout this specification and the appended claims, the terms “gesture identification system” and “wearable gesture identification device” are used substantially interchangeably because a wearable gesture identification device is used as an example of a gesture identification system herein. The methods described herein may be implemented using a gesture identification system that may or may not include, comprise, or consist entirely of a wearable gesture identification device. For greater certainty, a “gesture identification system” and a “wearable gesture identification device” both include at least one sensor responsive to (i.e., to detect, sense, measure, or transduce) user-performed gestures and a processor communicatively coupled to the at least one sensor. In a “gesture identification system,” the at least one sensor and the processor may both be components of a single device or they may each be respective components of two or more physically disparate devices, whereas a “gesture identification device” is a single device that comprises both the at least one sensor and the processor. Thus, in the examples of the gesture identification methods that follow, in a “wearable gesture identification device” the gesture identification acts performed, carried out, and/or completed by a processor (collectively, the “gesture identification process”) are generally performed, carried out, and/or completed on-board the wearable gesture identification device by an on-board processor, whereas in a gesture identification system the gesture identification process may be performed, carried out, and/or completed by a processor on-board a wearable component of the system or by a processor that is physically separate from a wearable component of the system (such as by the processor of a laptop or desktop computer).
The various embodiments described herein provide methods, algorithms, and/or techniques for performing automated gesture identification with enhanced robustness against variations in use parameters, as well as exemplary wearable gesture identification devices, such as device 100, that in use implement such methods, algorithms, and/or techniques. Throughout the descriptions of the methods, algorithms, and/or techniques that follow, reference is often made to the elements of device 100 from
Throughout this specification and the appended claims, “identifying” a user-performed gesture means associating at least one signal provided by one or more sensor(s) (110 and/or 140) with a particular physical gesture. In the various embodiments described herein, “identifying” a gesture includes determining which gesture in a gesture library has the highest probability (relative to the other gestures in the gesture library) of being the physical gesture that a user has performed or is performing in order to produce the signal(s) upon which the gesture identification is at least partially based, and returning that gesture as the “identity” of the user-performed gesture. Throughout this specification and the appended claims, the term “gesture library” is used to generally describe a set of physical gestures that a gesture identification system (100) is operative to identify. The gesture identification systems, devices, and methods described herein may not be operative to identify any arbitrary gesture performed by a user. Rather, the gesture identification systems, devices, and methods described herein may be operative to identify when a user performs one of a specified set of gestures, and that specified set of gestures is referred to herein as a gesture library. A gesture library may include any number of gestures, though a person of skill in the art will appreciate that the precision/accuracy of gesture identification may be inversely related to the number of gestures in the gesture library. A gesture library may be expanded by adding one or more gesture(s) or reduced by removing one or more gesture(s). Furthermore, in accordance with the present systems, devices, and methods, a gesture library may include a “rest” gesture corresponding to a state for which no activity is detected and/or an “unknown” gesture corresponding to a state for which activity is detected but the activity does not correspond to any other gesture in the gesture library.
Method 200 includes four acts 201, 202, 203, and 204, though those of skill in the art will appreciate that in alternative embodiments certain acts may be omitted and/or additional acts may be added. Those of skill in the art will also appreciate that the illustrated order of the acts is shown for exemplary purposes only and may change in alternative embodiments. For the purpose of the present systems, devices, and methods, the term “user” refers to a person that is wearing the wearable gesture identification device (100).
At 201, at least one sensor (110 and/or 140) of the gesture identification system (100) provides at least one signal to the processor (120). Throughout this specification and the appended claims, the term “provide” and variants such as “provided” and “providing” are frequently used in the context of signals. Unless the specific context requires otherwise, the term “provide” is used in a most general sense to cover any form of providing a signal, including but not limited to: relaying a signal, outputting a signal, generating a signal, routing a signal, creating a signal, transducing a signal, and so on. For example, a surface EMG sensor may include at least one electrode that resistively or capacitively couples to electrical signals from muscle activity. This coupling induces a change in a charge or electrical potential of the at least one electrode which is then relayed through the sensor circuitry and output, or “provided,” by the sensor. Thus, the surface EMG sensor may “provide” an electrical signal by relaying an electrical signal from a muscle (or muscles) to an output (or outputs). In contrast, an inertial sensor may include components (e.g., piezoelectric, piezoresistive, capacitive, etc.) that are used to convert physical motion into electrical signals. The inertial sensor may “provide” an electrical signal by detecting motion and generating an electrical signal in response to the motion.
Providing at least one signal from the at least one sensor (110 and/or 140) to the processor (120) at 201 may include providing at least one substantially continuous (i.e., time-varying) data stream from the at least one sensor (110 and/or 140) to the processor (120). In implementations in which the at least one sensor (110 and/or 140) includes a plurality of sensors (as in exemplary device 100), a respective signal may be provided to the processor (120) at 201 from each respective one of multiple (e.g., all) sensors (110 and/or 140) in the plurality of sensors.
At 202, the at least one signal provided from the at least one sensor (110 and/or 140) to the processor (120) at 201 is segmented into discrete data windows. This segmenting may be done by the processor (120), by elements of the sensors (110 and/or 140), or by some intermediate circuitry communicatively coupled in between the processor (120) and the sensors (110 and/or 140). For the purposes of the present systems, devices, and methods, the term “data window” refers to a particular section, parcel, segment, excerpt, subset, or portion of data from a larger collection of data, where the data in the data window all satisfy at least one bound or constraint. As an example, a data window may correspond to data collected within a defined range of a larger space of time. Thus, for data collected over a time period T, a data window may correspond to data collected during a portion t of the full time period T, where t<T.
Segmenting the at least one signal into discrete data windows per 202 generally includes segmenting the at least one signal into multiple data windows. Successive data windows (e.g., respective portions ti collected over a time period T) may be sequential, they may overlap, or there may be gaps therebetween. Thus, the term “discrete” in “discrete data windows” denotes that each data window is a respective section, parcel, segment, excerpt, subset, or portion of data from the larger collection of data and does not necessarily mean that there is no data in common between multiple data windows (e.g., as in the case of overlapping data windows). If the at least one signal provided by the at least one sensor (110 and/or 140) at 201 is a substantially continuous data stream, then at 202 the processor (120) may segment the at least one substantially continuous data stream into discrete data windows in real-time; that is, the processor (120) may segment the continuous data stream as the data stream comes into the processor (120) without first storing or archiving the data stream in a physically separate memory (though the data may be temporarily stored in a register or CPU cache of the processor (120) itself as part of normal processor operation). In implementations in which the at least one sensor (110 and/or 140) includes a plurality of sensors (as in exemplary device 100), act 202 may include segmenting the respective signal from each respective sensor (110 or 140) in the plurality of sensors into the same discrete data windows. For example, each ith data window may correspond to the same ith window of time, ti, for the signal from each respective sensor (110 and/or 140), and each data window may include a respective portion of the signal from each respective sensor. An exemplary illustration of segmenting multiple signals into the same discrete data windows per 202 is provided in
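By way of illustration only, the segmentation of act 202 may be sketched as follows; the function name, the list-of-lists signal layout, and the window length and step parameters are assumptions for this example, not details of device 100:

```python
# Illustrative sketch of act 202: segmenting multi-channel sensor
# signals into discrete data windows. The window length, the step
# (which controls overlap), and the list-of-lists signal layout are
# assumptions for this example.

def segment_into_windows(signals, window_len, step):
    """Split each channel's samples into the same discrete data windows.

    signals: list of per-sensor sample sequences of equal length.
    window_len: number of samples per data window.
    step: offset between window starts; step < window_len yields
          overlapping windows, step > window_len leaves gaps.
    Returns a list of windows, each holding one slice of samples per
    sensor covering the same span of time.
    """
    n = len(signals[0])
    windows = []
    for start in range(0, n - window_len + 1, step):
        windows.append([ch[start:start + window_len] for ch in signals])
    return windows

# Two channels, ten samples each; windows of four samples, 50% overlap.
ch1 = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
ch2 = [9, 8, 7, 6, 5, 4, 3, 2, 1, 0]
wins = segment_into_windows([ch1, ch2], window_len=4, step=2)
print(len(wins))   # 4 windows, starting at samples 0, 2, 4, and 6
print(wins[0][0])  # [0, 1, 2, 3]
```

Passing a step equal to the window length would instead produce sequential, non-overlapping windows of the kind described above.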
For the data depicted in
Signals 301, 302, 303, and 304 represent signals provided by respective muscle activity sensors (110) for the duration of a user-performed gesture. The muscle activity sensors (110) providing signals 301, 302, and 304 detect substantial muscle activity during the user-performed gesture (and therefore provide signals of substantial amplitude) while the muscle activity sensor (110) providing signal 303 does not detect substantial muscle activity during the user-performed gesture (and therefore does not provide a signal of substantial amplitude).
As previously described, at 202 of method 200 the at least one signal provided by the at least one sensor (110) is segmented into discrete data windows. Graph 300 of
Each of data windows t1, t2, and t3 encapsulates data from all four signals 301, 302, 303, and 304 for a respective window of time. Specifically, data window t1 includes respective first portions from each of signals 301, 302, 303, and 304, the respective first portions contained within the same window of time t1 for each of signals 301, 302, 303, and 304; window t2 includes respective second portions from each of signals 301, 302, 303, and 304, the respective second portions contained within the same window of time t2 for each of signals 301, 302, 303, and 304; and window t3 includes respective third portions from each of signals 301, 302, 303, and 304, the respective third portions contained within the same window of time t3 for each of signals 301, 302, 303, and 304. Exemplary data windows t1, t2, and t3 are sequential in
Returning to method 200, at 203 the processor (120) assigns a respective “window class” to each data window (e.g., each of data windows t1, t2, and t3 from
Throughout this specification and the appended claims, each window class in a library of window classes may represent a particular configuration of one or more sensor signal parameter(s) in, over, or for the interval of a single data window. In general, each window class in the library of window classes exclusively characterizes at least one data window property. In some implementations, all window classes may exclusively characterize a respective range of values for the same data window property. The at least one data window property characterized by a window class may be averaged or otherwise expressed over the entire data window interval. For example, for a gesture identification system (100) employing muscle activity sensors (110), the at least one signal (e.g., 301, 302, 303, and/or 304) provided by each sensor (110) may include a time-varying voltage signal and the data window property exclusively characterized by each window class may include a Root Mean Square (“RMS”) value (e.g., a particular range of RMS values) for the time-varying voltage signal over the data window interval. Other properties that may be averaged or otherwise characterized over an entire data window include, without limitation: a mean of the signal amplitude over the data window, a median of the signal amplitude over the data window, a mode of the signal over the data window, a mean power frequency of the signal over the data window, a maximum value of the signal over the data window, a variation of the signal over the data window, and/or a standard deviation of the signal over the data window. Throughout the remainder of this specification, the RMS value of a data window is used as an example of a data window property upon which window class definitions may be based; however, the present systems, devices, and methods are not limited to implementations in which window classes are defined by RMS values. 
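For illustration, the RMS value of a data window may be computed as in the following sketch; the function name is arbitrary:

```python
# Minimal sketch of one candidate data window property: the Root Mean
# Square (RMS) value of a time-varying signal over the data window
# interval. The function name is illustrative.
import math

def window_rms(samples):
    """RMS of the signal samples in one data window."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

print(window_rms([3, -3, 3, -3]))  # 3.0
print(window_rms([0, 0, 0, 0]))    # 0.0
```

Any of the other one-number summaries listed above (mean, median, maximum, standard deviation, and so on) could be substituted in the same role.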
In general, any data window property or data window properties (i.e., any combination of data window properties) may be used to define window classes in accordance with the present systems, devices, and methods.
A library or “alphabet” of window classes may be defined, developed, generated, and/or created by analyzing a collection of data windows. For example, in a “data collection” or “training” phase, one or many gesture identification system(s) (100) may be used to collect and save data windows for multiple trials of multiple gestures performed by multiple users. An example data collection or training phase may include performing acts 201 and 202 of method 200 for multiple trials of multiple gestures performed by multiple users and saving the resulting data windows to a database.
In the illustrated example, the distribution of data points 401 is not uniform throughout space 400. There are regions of exemplary space 400 in which data points 401 are sparsely distributed and there are regions of exemplary space 400 in which data points 401 are densely clustered. Example regions 411, 412, 413, 414, 415, and 416 of space 400 in which data points 401 are particularly densely clustered are indicated by dashed rectangles. Each of regions 411, 412, 413, 414, 415, and 416 encompasses a respective group of data points 401 that all represent RMS values within a respective exclusive range. For example, region 414 exclusively encompasses all data points 401 (i.e., all data windows) for which the RMS value of the first voltage signal V1 is in between A and B (i.e., A<RMS1(ti)<B) and the RMS value of the second voltage signal V2 is in between C and D (i.e., C<RMS2(ti)<D). Rectangular regions 411, 412, 413, 414, 415, and 416 are used in
Each of regions 411, 412, 413, 414, 415, and 416 in space 400 corresponds to a respective set of data windows having specific ranges of RMS values that occur with relatively high frequency and narrow variation. In accordance with the present systems, devices, and methods, each of regions 411, 412, 413, 414, 415, and 416 in space 400 may be defined as a respective window class in an implementation of method 200. For example, each of regions 411, 412, 413, 414, 415, and 416 of space 400 may directly map to a respective window class according to Table 1 below:
To emphasize the analogy between “window classes” of a library of window classes and “letters” of an alphabet, each window class is represented by a respective letter (i.e., A, B, C, D, E, F, and X) in Table 1, although a person of skill in the art will appreciate that a window class may be represented by any name, number, symbol (collectively characters) or combination of characters depending on the implementation. In the example of Table 1, the library of window classes consists of window classes A, B, C, D, E, F, and X.
Returning again to act 203 of method 200, the processor (120) assigns a respective window class to each data window. Using the example of Table 1 (which is based on regions 411, 412, 413, 414, 415, and 416 identified in 2-dimensional space 400 of
Throughout this specification and the appended claims, each window class in a library of window classes is often said to “exclusively characterize” at least one data window property. Generally, “exclusively characterizes” means that the window classes are defined/structured such that no data window can simultaneously satisfy the criteria of (i.e., be assigned or fall into) more than one window class. For example, regions 411, 412, 413, 414, 415, and 416 exclusively characterize data points 401 (per the mappings to window classes in Table 1) because there is no overlap between any of regions 411, 412, 413, 414, 415, and/or 416. In the example of space 400, it is clear that multiple regions may overlap for a single dimension (e.g., regions 413 and 414 overlap for a portion of the V1 signal), but the area, volume, or N-dimensional space of any region may advantageously not overlap that of any other region.
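The mapping from exclusive regions to window classes may be sketched as follows. The numeric bounds are invented for illustration and do not correspond to regions 411 through 416; only the structure (non-overlapping rectangular regions plus a catch-all class X) follows the description above:

```python
# Hedged sketch of assigning a window class from per-sensor RMS values:
# each class exclusively characterizes a rectangular region of the
# 2-dimensional (RMS1, RMS2) space, and any point outside every region
# falls into the catch-all class "X". All numeric bounds are invented.

REGIONS = {
    # class: ((rms1_min, rms1_max), (rms2_min, rms2_max))
    "A": ((0.0, 0.2), (0.0, 0.2)),
    "B": ((0.2, 0.5), (0.0, 0.2)),
    "C": ((0.0, 0.2), (0.2, 0.5)),
    "D": ((0.2, 0.5), (0.2, 0.5)),
    "E": ((0.5, 1.0), (0.0, 0.5)),
    "F": ((0.0, 0.5), (0.5, 1.0)),
}

def assign_window_class(rms1, rms2):
    """Return the single window class whose region contains the point."""
    for cls, ((x0, x1), (y0, y1)) in REGIONS.items():
        if x0 <= rms1 < x1 and y0 <= rms2 < y1:
            return cls
    return "X"  # activity that matches no defined region

print(assign_window_class(0.1, 0.1))  # A
print(assign_window_class(0.3, 0.3))  # D
print(assign_window_class(0.9, 0.9))  # X
```

Because the half-open intervals do not overlap, no data window can satisfy more than one class, which is the “exclusively characterizes” property.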
In
Angle θ420 is formed at the origin of space 400. Thus, as an alternative to mapping discrete cluster regions 411, 412, 413, 414, 415, and 416 to window classes as in the example of Table 1, space 400 may be divided into slices defined by ranges of angles at the origin and these slices may be mapped to window classes. An example is shown in Table 2 below, where angles are defined relative to the x-axis in the same way as illustrated for angle θ420 in
Angle θ420 and the angles listed in Table 2 are all defined at the origin relative to the x-axis; however, in alternative implementations the angle(s) that define one or more slice region(s) in an N-dimensional hyperspace may be otherwise defined, including without limitation: relative to a different axis (i.e., not the x-axis) of the N-dimensional hyperspace, relative to a point, line, vector, plane, area, volume, or hyperplane of the N-dimensional hyperspace, and so on. Furthermore, a person of skill in the art will appreciate that an N-dimensional slice region may be defined or otherwise characterized by multiple angles and/or other boundaries in an N-dimensional hyperspace. Angle θ420 and the angles of Table 2 are defined in a 2-dimensional space (400) solely for ease of illustration. In general, the various embodiments described herein may employ window classes that exclusively characterize regions of an N-dimensional hyperspace, including without limitation: cluster regions; slice regions; regions bounded by points, lines, planes, hyperplanes, etc.; irregularly shaped regions; and any combination of the foregoing. In some implementations, multiple disparate regions in an N-dimensional hyperspace (e.g., cluster regions 411 and 415) may be mapped to the same window class.
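An angle-based (slice-region) window class assignment in the spirit of Table 2 may be sketched as follows; the slice boundaries are invented for illustration:

```python
# Sketch of the alternative, angle-based window classes described
# above: the 2-dimensional RMS space is divided into slices by the
# angle a (RMS1, RMS2) point forms at the origin relative to the
# x-axis, so window classes capture the ratio of sensor activity
# rather than its absolute magnitude. Slice bounds are invented.
import math

SLICES = [
    # (lower_deg, upper_deg, class); together the slices cover 0-90 deg.
    (0.0, 15.0, "A"),
    (15.0, 40.0, "B"),
    (40.0, 65.0, "C"),
    (65.0, 90.0, "D"),
]

def assign_slice_class(rms1, rms2):
    """Window class from the angle at the origin (degrees from x-axis)."""
    theta = math.degrees(math.atan2(rms2, rms1))
    for lo, hi, cls in SLICES:
        if lo <= theta < hi or (theta == 90.0 and hi == 90.0):
            return cls
    return "X"

# Points on the same ray map to the same class regardless of amplitude:
print(assign_slice_class(0.1, 0.1))  # C (45 degrees)
print(assign_slice_class(0.8, 0.8))  # C (45 degrees)
```

This amplitude-invariance is one way such classes can tolerate variations in use parameters (e.g., signal strength) that scale all channels together.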
At least some embodiments of the present systems, devices, and methods achieve enhanced robustness against variations in use/user parameters by, at least in part, using window classes that correspond to ratios, angles, and/or ranges of angles in an N-dimensional hyperspace in order to accommodate ranges of signal amplitudes that correspond to substantially the same signal configuration. This concept is similar to the teachings of U.S. Provisional Patent Application Ser. No. 61/915,338 (now U.S. Non-Provisional patent application Ser. No. 14/567,826), which is incorporated herein by reference in its entirety.
Returning to method 200 of
In some implementations, the processor (120) may identify the user-performed gesture at 204 based, at least in part, on the relative probabilities that each gesture in a gesture library is the user-performed gesture. For example, method 200 may further include determining, by the processor (120), a respective probability P that each gesture in a gesture library is the user-performed gesture based on the respective window classes of at least two data windows. In this case, identifying the user-performed gesture by the processor (120) may include identifying, by the processor (120), a “highest-probability gesture” that corresponds to the gesture in the gesture library that has the highest probability (relative to the other gestures in the gesture library) of being the user-performed gesture.
In accordance with the present systems, devices, and methods, many different schemes, techniques, methods, formulae, calculations, algorithms, or most generally “approaches,” may be employed to determine, by the processor (120), a respective probability that each gesture in a gesture library is the user-performed gesture based on the respective window classes of at least two data windows. As an example and again borrowing from established natural language processing techniques, the processor (120) may determine such probabilities based, at least in part, on at least one respective n-gram transition model for each gesture in the gesture library. Example n-gram transition models include, without limitation: a unigram transition model based on the window class of a single data window, a bigram transition model based on the respective window classes of two data windows, and/or a trigram transition model based on the respective window classes for three data windows. For a given gesture g in a gesture library, the unigram P_g(A), defined in equation 1 below, gives the probability that window class A will occur among the data windows produced (e.g., at 202 of method 200) for that gesture g:

P_g(A) = count(A) / Σ_{i=1}^{k} count(A_i)   (1)

where Σ_{i=1}^{k} count(A_i) sums over all of the window classes A_i in the library of window classes. Similarly, the bigram P_g(A_j|A_i), defined in equation 2 below, gives the probability that the sequence of window classes A_i, A_j (where i≠j denotes the data window) will occur among the data windows produced (e.g., at 202 of method 200) for a given gesture g:

P_g(A_j|A_i) = count(A_i, A_j) / count(A_i)   (2)
In an exemplary implementation of the present systems, devices, and methods, for each gesture in a gesture library either the unigram, or the bigram, or both the unigram and bigram, may be used to determine the probability that the gesture is the user-performed gesture based on the respective window classes of at least two data windows. To illustrate, act 203 of method 200 may produce the following sequence of window classes (using the exemplary window classes from Table 1 or Table 2, with each window class in the sequence assigned to a respective data window per 203):

A A C A
Using this exemplary sequence of window classes, the unigram from equation 1 may be used to determine the probability that each gesture g in a gesture library is the user-performed gesture as follows:
P_Uni^g(AACA) = P_g(A)·P_g(A)·P_g(C)·P_g(A)   (3)
For the same exemplary sequence of window classes, the bigram from equation 2 may be used to determine the probability that each gesture g in the gesture library is the user-performed gesture as follows:
P_Bi^g(AACA) ≅ P_g(A)·P_g(A|A)·P_g(C|A)·P_g(A|C)   (4)
Given the unigram P_Uni^g(AACA) and the bigram P_Bi^g(AACA) for the sequence of window classes AACA, the highest-probability gesture g* may be defined as, for example, the gesture in the gesture library that has either the maximum unigram, the maximum bigram, the maximum unigram or bigram, or the maximum value of an interpolation between the unigram and the bigram. An exemplary linear interpolation between the unigram and the bigram is given by equation 5:
P_Inter^g(AACA) = α·P_Uni^g(AACA) + (1−α)·P_Bi^g(AACA)   (5)
where α may be any number between 0 and 1.
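The unigram, bigram, and interpolated measures of equations 1 through 5 may be sketched as follows. The toy training sequences and the unsmoothed maximum-likelihood counts are illustrative assumptions:

```python
# Hedged sketch of the n-gram scoring in equations 1-5: per-gesture
# unigram and bigram probabilities are estimated from window-class
# sequences recorded in a training phase, then a new sequence such as
# "AACA" is scored for each gesture. The tiny training corpus and the
# unsmoothed counting are simplifications for illustration.

def train_ngrams(sequences):
    """Count unigrams and bigrams over one gesture's training sequences."""
    uni, bi = {}, {}
    for seq in sequences:
        for c in seq:
            uni[c] = uni.get(c, 0) + 1
        for a, b in zip(seq, seq[1:]):
            bi[(a, b)] = bi.get((a, b), 0) + 1
    return uni, bi

def p_uni(uni, c):
    """Equation 1: count(c) over the total count of all window classes."""
    return uni.get(c, 0) / sum(uni.values())

def p_bi(uni, bi, prev, c):
    """Equation 2: count(prev, c) over count(prev)."""
    return bi.get((prev, c), 0) / uni.get(prev, 1)

def score(seq, uni, bi, alpha=0.5):
    """Equation 5: linear interpolation of unigram and bigram scores."""
    s_uni = 1.0
    for c in seq:
        s_uni *= p_uni(uni, c)                    # equation 3 style
    s_bi = p_uni(uni, seq[0])
    for prev, c in zip(seq, seq[1:]):
        s_bi *= p_bi(uni, bi, prev, c)            # equation 4 style
    return alpha * s_uni + (1 - alpha) * s_bi

# Toy training data for two hypothetical gestures:
fist_uni, fist_bi = train_ngrams(["AACA", "AACA", "AABA"])
point_uni, point_bi = train_ngrams(["BBCB", "BBDB", "BCBB"])

# "AACA" scores higher under the FIST model than under the POINT model.
print(score("AACA", fist_uni, fist_bi) > score("AACA", point_uni, point_bi))
```

A production implementation would typically add smoothing so that a single unseen window class does not zero out a gesture's score; that refinement is omitted here for brevity.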
Another example of a probability measure that may be used in the present systems, devices, and methods (e.g., either instead of or in addition to the n-gram transition model(s) described above) is based on the pairwise simultaneous occurrences of at least two window classes over a subset of data windows. For a given gesture g in a gesture library, the pairwise model P_g(A_i∧A_j), defined in equation 6 below, gives the probability that window classes A_i and A_j will both occur (not necessarily in sequence) in a subset of data windows produced (e.g., at 202 of method 200) for that gesture g:
where N_g is the total number of data windows available, m is the number of data windows in the subset being examined, and g
Using the pairwise model from equation 6, the probability that a gesture g in a gesture library is the user-performed gesture for the exemplary sequence of window classes AACA may be determined using equation 7:
P_Pair^g(AACA) := P_g(A∧A)·P_g(A∧C)·P_g(C∧A)   (7)
In accordance with the present systems, devices, and methods, many different types of probability measures may be used to determine the respective probability that each gesture in a gesture library is the user-performed gesture and, furthermore, any number and/or combination of probability measures may be used to identify the highest-probability gesture. Generally, the highest-probability gesture g* may be defined, for the exemplary sequence of window classes AACA, by equation 8:

g* := argmax_{g_i, P_x} P_x^{g_i}(AACA)   (8)
where g_i tests each gesture in the gesture library and P_x tests all of the different probability measures (e.g., unigram, bigram, linear interpolation, pairwise) explored in the implementation. For greater certainty, the pairwise model P_g(A_i∧A_j) may be combined with one or more n-gram transition model(s) using an interpolation such as the linear interpolation of equation 5.
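The selection of the highest-probability gesture g* per equation 8 may be sketched as follows; the toy models and the single stand-in probability measure are assumptions for illustration:

```python
# Illustrative sketch of equation 8: every gesture in the gesture
# library is scored under one or more probability measures, and the
# gesture achieving the maximum score is returned. The scoring
# functions here are stand-ins for the unigram, bigram, interpolated,
# and pairwise measures described in the text.

def highest_probability_gesture(seq, gesture_models, measures):
    """Return (gesture, score) maximizing score over gestures x measures.

    gesture_models: dict mapping gesture name -> model object.
    measures: list of functions f(seq, model) -> probability.
    """
    best_gesture, best_score = None, float("-inf")
    for gesture, model in gesture_models.items():
        for measure in measures:
            s = measure(seq, model)
            if s > best_score:
                best_gesture, best_score = gesture, s
    return best_gesture, best_score

# Toy models: each "model" is just a dict of per-class counts, and the
# single measure multiplies per-class probabilities (a unigram-like
# stand-in for the measures above).
def toy_measure(seq, freqs):
    total = sum(freqs.values())
    p = 1.0
    for c in seq:
        p *= freqs.get(c, 0) / total
    return p

models = {
    "FIST":  {"A": 8, "C": 2},
    "POINT": {"B": 7, "C": 3},
}
print(highest_probability_gesture("AACA", models, [toy_measure])[0])  # FIST
```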
When the signals provided by sensors (110 and/or 140) in response to a user-performed gesture (e.g., per act 201 of method 200) are segmented or otherwise broken down into data windows (e.g., per act 202 of method 200), the highest probability gesture may be determined at each ith data window based on the particular combination, permutation, or sequence of window classes for the data windows leading up to and including the ith data window. In accordance with the present systems, devices, and methods, the user-performed gesture may ultimately be identified by the processor (120; per act 204 of method 200) based, at least in part, on repeat occurrences of the identification of the same gesture from the gesture library as the highest-probability gesture determined at multiple successive data windows. Table 3 below provides an illustrative example of this concept.
The information in Table 3 is now summarized. Data/signals from one or more sensors (110 and/or 140) are segmented (per act 202 of method 200) into seven data windows t1, t2, t3, t4, t5, t6, and t7. This segmentation may occur in real-time as the datastream is continuously provided from the one or more sensors (110 and/or 140). While data window t2 is being provided by the sensor(s) (110 and/or 140), data window t1 is assigned (per act 203 of method 200) a window class A by the processor (120). When collection of data window t2 is completed, the processor (120) assigns (per act 203) window class A to data window t2. With at least two data windows (i.e., t1 and t2) now having been collected, the processor (120) determines the respective probability that each gesture in a gesture library is the user-performed gesture based on the data window sequence AA and identifies FIST as the highest-probability gesture. Data window t3 is then examined by the processor (120) and assigned (per act 203) window class C. Based on the sequence of window classes AAC up to and including data window t3, the processor (120) determines that the highest-probability gesture is POINT. The processor (120) then assigns (per act 203) window class A to data window t4 and determines that the highest-probability gesture for the sequence of window classes AACA is FLICK. Moving on to data windows t5, t6, and t7, the processor (120) assigns (per act 203) window classes B, F, and C, respectively, and identifies the highest-probability gesture for the sequence AACAB as FIST, for the sequence AACABF as FIST, and for the sequence AACABFC as FIST. With three successive determinations of FIST as the gesture having the highest probability of being the user-performed gesture, the processor (120) returns FIST as the identity of the user-performed gesture. At this point, the user may or may not have completed performing their FIST gesture. 
For example, the FIST gesture may have spanned seven data windows, or the FIST gesture may span more than seven data windows but the processor may have identified the gesture as FIST without needing to process more than seven data windows.
In accordance with the present systems, devices, and methods, the final determination/identification of the user-performed gesture by the processor (120) may be established in a variety of different ways depending on the particular implementation. In the illustrative example of Table 3, a prescribed number X of repeat determinations of the same gesture from the gesture library as the highest-probability gesture for multiple successive data windows is used as an exit criterion that must be satisfied before the processor (120) ultimately identifies/determines the user-performed gesture. In the example of Table 3, X=3 and the criterion is met over successive data windows t5, t6, and t7; however, in alternative embodiments X may be any number greater than or equal to 1, such as for example X=5. Furthermore, while the X criterion used in Table 3 requires repeat successive determinations of the same gesture as the highest-probability gesture, in alternative implementations an exit criterion may specify a number of repeat determinations of the same gesture as the highest-probability gesture without requiring that the repeat determinations be successive. For example, some implementations may require that the same gesture be identified as the highest-probability gesture Y times, but the Y occurrences may be distributed among any number of data windows in any order, combination, or permutation (e.g., Y=4, with a first occurrence in data window t2, a second occurrence in data window t5, a third occurrence in data window t6, and a fourth occurrence in data window t10), though in practice an upper bound on the number of data windows examined is often appropriate.
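The X-successive-repeats exit criterion illustrated in Table 3 can be sketched as a simple run-length check over the per-window gesture decisions. The per-window decisions are assumed to come from the probability models described above; the sequence below mirrors the Table 3 narrative.

```python
def identify_with_repeats(per_window_gestures, x=3):
    """Return a gesture once it has been the highest-probability gesture for
    x successive data windows; return None if the criterion is never met."""
    run_gesture, run_length = None, 0
    for gesture in per_window_gestures:
        if gesture == run_gesture:
            run_length += 1
        else:
            run_gesture, run_length = gesture, 1
        if run_length >= x:
            return run_gesture
    return None


# Per-window decisions for t2..t7 as in Table 3: FIST is returned after
# three successive identifications (t5, t6, t7).
identified = identify_with_repeats(
    ["FIST", "POINT", "FLICK", "FIST", "FIST", "FIST"], x=3)
```

A non-successive Y-occurrence criterion would instead count occurrences per gesture without resetting the counter on a mismatch.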
Some implementations may take the number of data windows into account during the gesture identification process. For example, the X and/or Y thresholds specified above may evolve (e.g., to provide an at least approximately fixed percentage W=X/N, where N is the current number of data windows) depending on the number of data windows being examined, or an exit criterion may include returning the gesture that has been identified most frequently as the highest-probability gesture after Z data windows. Some implementations may simply use a threshold probability P* for the highest-probability gesture, and return the current highest-probability gesture as the user-performed gesture if the probability of the highest-probability gesture being the user-performed gesture is greater than or equal to P*.
In some implementations, the number of data windows being analyzed may continue to grow, one at a time, with each successive data window (e.g., first just t1, then {t1, t2}, then {t1, t2, t3}, . . . , then {t1, t2, . . . , tn}, and so on). In other implementations, the number of data windows being examined may be capped at a predefined number C. In this latter scenario, the number of data windows being analyzed may increase one at a time until the cap C is reached, and then each new data window may replace a previous data window. For example, with the cap C=3, first data window t1 may be analyzed, then {t1, t2}, then {t1, t2, t3}, and thereafter with each new data window added a previous data window may be left off. In this way, after {t1, t2, t3}, t4 is added to the analysis and t1 is removed to produce {t2, t3, t4}, then {t3, t4, t5}, then {t4, t5, t6}, then {t5, t6, t7}, and so on. In different implementations, a cap C, if applied at all, may take on a variety of different numbers, such as 2, 3, 4, 5, 6, etc.
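The capped sliding-window scheme above maps naturally onto a bounded queue: the set of windows under analysis grows until the cap is reached, after which each new window displaces the oldest. A minimal sketch with C = 3:

```python
from collections import deque

# A deque with maxlen=3 automatically discards the oldest window once the
# cap C=3 is reached, reproducing the growth-then-slide behavior.
windows = deque(maxlen=3)
history = []
for t in ["t1", "t2", "t3", "t4", "t5", "t6", "t7"]:
    windows.append(t)
    history.append(list(windows))

# history now contains: ["t1"], ["t1","t2"], ["t1","t2","t3"],
# ["t2","t3","t4"], ["t3","t4","t5"], ["t4","t5","t6"], ["t5","t6","t7"]
```

The same structure works for any cap C by changing `maxlen`.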
As previously described, a gesture identification system (100) may include a non-transitory processor-readable storage medium or memory (130) that is communicatively coupled to the processor (120), where the memory (130) may store a set of processor-executable gesture identification instructions (131) that, when executed by the processor (120), cause the processor (120) to, at least, assign a respective window class to each data window per act 203 of method 200 and identify the user-performed gesture based at least in part on the respective window classes of at least two data windows per act 204 of method 200. The processor-executable gesture identification instructions (131) may include instructions that, when executed by the processor (120), cause the processor (120) to determine a respective probability that each gesture in a gesture library is the user-performed gesture using, for example, the probability schemes of equations 1 through 8 described previously. The memory (130) may store the library of window classes from which individual window classes are selected and assigned to data windows at act 203 of method 200 and/or the memory (130) may store the gesture library from which the user-performed gesture is selected by the processor (120) based on the respective window classes of at least two data windows per act 204 of method 200. In some implementations, the processor-executable gesture identification instructions (131) may, when executed by the processor (120), cause the processor (120) to segment the at least one signal provided from the at least one sensor (110 and/or 140) into discrete data windows per act 202 of method 200.
In the various implementations described herein, at least some of the acts performed by the processor (120) of a gesture identification system (100) may be carried out substantially continuously and/or at least some of the acts performed by the processor (120) of a gesture identification system (100) may be carried out in an “as needed” or “on call” fashion. Limiting some acts to being performed only “as needed” or “on call” may conserve battery (160) power in a wearable gesture identification device (100). As an example, acts 201 and 202 of method 200 may be performed or carried out substantially continuously and/or at all times while device 100 is powered on and in use by a user, while act 203 (and consequently act 204) may only be performed or carried out in response to the user performing a physical gesture. To this end, method 200 may further include detecting initiation of the user-performed gesture by the processor (120) based on at least one property of at least one data window. This may include detecting, by the processor (120), initiation of the user-performed gesture based on at least one RMS value for at least one data window. While a user is wearing a wearable electronic component of a gesture identification system (100), signals continuously provided by the sensors (110 and/or 140) may typically have low RMS values if the user is not performing a physical gesture. Initiation of a user-performed gesture may be detected as a spike in the RMS value(s) of one or more data window(s) corresponding to an increase in activity of whatever form to which the sensor(s) (110 and/or 140) is/are designed to respond (i.e., to detect, sense, measure, or transduce). For example, a spike in the RMS value from an EMG sensor (110) may indicate initiation of a muscle activity component of a user-performed gesture.
In some implementations, an activation threshold may be defined and acts 203 and 204 of method 200 may only be carried out by the processor (120) in response to the RMS value of at least one data window exceeding the activation threshold.
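The RMS-based activation check can be sketched as follows. The threshold value and sample data here are illustrative, not values from the specification.

```python
import math

def rms(window):
    """Root-mean-square of one data window's samples."""
    return math.sqrt(sum(s * s for s in window) / len(window))


def gesture_initiated(window, activation_threshold):
    """Acts 203/204 are triggered only when the window's RMS value
    exceeds the activation threshold."""
    return rms(window) > activation_threshold


quiescent = [0.01, -0.02, 0.015, -0.01]   # low-activity ("rest") signal
active = [0.8, -0.9, 1.1, -0.7]           # spike, e.g., a muscle contraction
start_detected = gesture_initiated(active, activation_threshold=0.1)
```

In a multi-channel system the same check may be applied per channel, with activation requiring one or several channels to exceed their thresholds.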
As will be clear to a person of skill in the art based on the description of method 200, the various embodiments described herein include iterative methods for performing automated gesture identification in real-time, with each iteration corresponding to a respective data window from the data stream(s) provided by sensor(s) (110 and/or 140) of a wearable gesture identification device (100). This concept is illustrated in
Method 500 includes six acts 501, 502, 503, 504, 505, and 506, though those of skill in the art will appreciate that in alternative embodiments certain acts may be omitted and/or additional acts may be added. Those of skill in the art will also appreciate that the illustrated order of the acts is shown for exemplary purposes only and may change in alternative embodiments. Method 500 is substantially similar to method 200 from
Acts 501 and 502 of method 500 are substantially similar to acts 201 and 202 of method 200, respectively. At 501 at least one signal (e.g., at least one electrical signal) is provided from at least one sensor (110 and/or 140) of the gesture identification system (100) to the processor (120), and at 502 the at least one signal is segmented into discrete data windows. Acts 503, 504, and 505 of method 500 then detail the iterative processing of individual data windows. As indicated in
At 503, the processor (120) determines (e.g., assigns) a window class for the ith data window. The window class may be selected by the processor (120) from a library of window classes in a substantially similar way to that described for act 203 of method 200. As previously described, each window class exclusively characterizes at least one data window property (e.g., RMS value, region/angle in an N-dimensional hyperspace, etc.).
At 504, the processor (120) determines a respective probability that each gesture in a gesture library is the user-performed gesture based on:
In other words, at 504 the processor (120) determines a respective probability that each gesture in a gesture library is the user-performed gesture based on the window class of the current ith data window in the iteration cycle and (if the current data window is not the first data window in the iteration cycle) on the respective window classes of one or more previous jth data window(s) (j<i) in the iteration cycle. In some implementations, multiple different jth data windows (e.g., j=1, 3, and 7, where i>7) may be included in the determination of the respective probabilities, whereas in other implementations a single jth data window may be used, such as j=(i−1) (i.e., the immediately previous data window) or any j<(i−1). The respective probabilities may be determined using a wide range of methods, including but not limited to n-gram transition models and/or pairwise occurrences as outlined previously in equations 1 through 8. For n-gram transition models, a unigram transition model (equations 1 and 3) may be based on the window class of the ith data window (i.e., the current data window in the iteration cycle) and a bigram transition model (equations 2 and 4) may be based on the respective window classes of the ith data window and the jth=(i−1)th data window (i.e., the previous data window in the iteration cycle).
At 505, the processor (120) identifies a highest-probability gesture for the ith data window in a substantially similar way to that described for method 200. The highest-probability gesture may be defined by, for example, equation 8. After the highest-probability gesture is identified by the processor (120) for the ith data window, method 500 may return to act 503 and repeat acts 503, 504, and 505 for another data window (e.g., for the (i+1)th data window).
A series of iterations of acts 503, 504, and 505 (i.e., for a series of ith data windows: i, (i+1), (i+2), etc.) produces a series of identifications of the highest-probability gesture, with each identification of the highest-probability gesture corresponding to a respective one of the data windows. Returning to the illustrative example of Table 3, a first iteration of acts 503, 504, and 505 for a first data window i=1=t1 may return an inconclusive result for the highest-probability gesture (because, for example, t1 is the first data window and the probability formulae of equations 2 through 8 generally require at least two data windows), a second iteration of acts 503, 504, and 505 for a second data window i=2=t2 may return FIST as the highest-probability gesture, and so on. In accordance with the present systems, devices, and methods, the iteration cycle of acts 503, 504, and 505 may continue indefinitely or may terminate in response to a range of different criteria and/or conditions. For example, the iteration cycle of acts 503, 504, and 505 may continue even when method 500 proceeds to act 506, or the iteration cycle of acts 503, 504, and 505 may terminate in response to method 500 proceeding to act 506.
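The iteration cycle of acts 503, 504, and 505 can be sketched using a bigram transition model: each new window class multiplies every gesture's running probability by that gesture's transition probability from the previous class to the new one, and the highest-probability gesture is reported per window. The transition tables below are illustrative placeholders, not trained values.

```python
def iterate_bigram(window_classes, transitions):
    """Yield, for each successive data window, the current highest-probability
    gesture. transitions[g] maps (prev_class, class) -> probability for
    gesture g; unseen pairs get a small floor probability."""
    running = {g: 1.0 for g in transitions}
    prev = None
    for c in window_classes:
        if prev is not None:
            for g in running:
                running[g] *= transitions[g].get((prev, c), 1e-6)
        prev = c
        yield max(running, key=running.get)


transitions = {
    "FIST":  {("A", "A"): 0.6, ("A", "C"): 0.1, ("C", "A"): 0.5},
    "POINT": {("A", "A"): 0.2, ("A", "C"): 0.7, ("C", "A"): 0.2},
}
per_window = list(iterate_bigram(["A", "A", "C", "A"], transitions))
```

The per-window outputs of this generator are exactly the series of highest-probability gestures to which an exit criterion (act 506) would be applied.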
At 506, the processor (120) identifies the user-performed gesture based on the respective highest-probability gestures for at least two data windows (i.e., i and j, where i≠j) in the subset of data windows. In the illustrative example of Table 3, at 506 the processor (120) may return FIST as the user-performed gesture at data window i=6=t6 or i=7=t7 (depending on the implementation) because for both of those data windows the highest-probability gesture identified at 505 is a repeat occurrence of the same highest-probability gesture identified at 505 for the previous data window. As described previously, some implementations may identify the user-performed gesture at 506 based on X repeat occurrences of the same highest-probability gesture for X successive data windows (where X≥2), or Y repeat occurrences of the same highest-probability gesture distributed across Q data windows (where Q>Y), or based on a wide range of other potential conditions/criteria.
As previously described, a gesture identification system (100) may include a non-transitory processor-readable storage medium or memory (130) that is communicatively coupled to the processor (120), and the memory (130) may store processor-executable gesture identification instructions (131). In use, executing the gesture identification instructions (131) may cause the processor (120) to perform at least acts 503, 504, 505, and 506 of method 500. More specifically, the processor (120) may execute the gesture identification instructions (131) to cause the processor (120) to: determine the window class for each ith data window (per act 503), determine the respective probability that each gesture in the gesture library is the user-performed gesture for each ith data window (per act 504), identify the highest-probability gesture for each ith data window (per act 505), and identify the user-performed gesture based on the respective highest-probability gestures of at least two data windows (per act 506).
The ability of the gesture identification systems, devices (100), and methods described herein to accurately identify gestures may benefit, in some implementations, from specific information about at least some use parameters. For example, in order for a wearable gesture identification device (100) to perform accurate gesture identification as described herein, the algorithm(s) employed by the wearable device (100) may depend on and/or be influenced by the location, position, and/or orientation of the device's sensors (110 and/or 140) in relation to the body of the user. In accordance with the present systems, devices, and methods, all of the necessary information about the location, position, and/or orientation of the sensors (110 and/or 140) may be readily collected by the wearable device (100) by having the user perform a single reference gesture when the wearable device (100) is first donned. Such is a considerable improvement over the elaborate training procedures (requiring the user to perform a series of multiple trials for each of multiple gestures) required by known proposals for wearable devices that perform gesture identification.
A user may be instructed to don a wearable device (100) on, for example, one of their forearms with any orientation and at any location above the wrist and below the elbow that provides a comfortable, snug fit (although if muscle activity sensors are employed then best performance may be achieved with the device located proximate the most muscular part of the forearm, i.e., just below the elbow). A feature of exemplary wearable device 100 from
As an example, calibration may be done by:
As an alternative example, only one reference gesture template may be stored in the memory (130) of the wearable device (100), and when the user performs the reference gesture the configuration of the axes of the N-dimensional hyperspace in which the reference gesture template is defined may be varied until the reference gesture template best matches the incoming data representing the user's performance of the reference gesture. The configuration of the axes of the N-dimensional hyperspace in which the reference gesture template is defined that enables the reference gesture template to best match the incoming data from the user's performance of the reference gesture may then be used to calibrate the configuration of the axes of the N-dimensional hyperspace in which the gestures of the gesture library are defined (e.g., by matching the configuration of the axes of the N-dimensional hyperspace in which the gestures of the gesture library are defined to the configuration of the axes of the N-dimensional hyperspace for which the reference gesture template best matches the incoming data).
While many different gestures may be used as a reference gesture, an example of a suitable reference gesture is: begin with the arm (i.e., the arm upon which the device is worn) bent about 90 degrees at the elbow such that the upper arm (i.e., above the elbow) hangs loosely downwards from the shoulder and the lower arm (i.e., below the elbow) extends outwards in front of the user with the hand/fingers relaxed, the wrist straight, and the hand on its side with the palm facing inwards, then bend the wrist outwards such that the open palm faces forwards and extend the fingers (without deliberately splaying the fingers) to point outwards approaching ninety degrees to the forearm (i.e., as far past about thirty or forty-five degrees as is comfortable for the user) while calmly swiping the arm (rotating at the elbow) outwards away from the body.
As described above, a user may calibrate a wearable device (100) in accordance with the present systems, devices, and methods by performing only a single reference gesture. In some applications, no further training procedures may be required before the device can begin identifying gestures performed by the user.
In accordance with the present systems, devices, and methods, changes in the position and/or orientation of the wearable device (100) may produce changes (e.g., shifts, rotations, etc.) in the resulting signals provided by the sensors (110 and/or 140) when the user performs a physical gesture. An initial reference gesture as described herein is used to determine the “orientation” of the sensor signals. If the rotational orientation of device 100 is varied by, for example, 180 degrees, then the corresponding sensor signals may also be “rotationally reoriented” by 180 degrees. If the front-to-back orientation of device 100 is also varied, then the corresponding sensor signals may also be “front-to-back reoriented.” In either case (or in both cases), the gestures in the gesture library may be recalibrated to reflect the position and/or orientation of device 100 on the user's forearm based on the reference gesture.
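The "rotational" and "front-to-back" reorientation of sensor signals can be sketched as a remapping of channel indices. This assumes a band of evenly spaced sensor channels (eight here, an illustrative count); rotating the device then corresponds to a cyclic shift of the channels, and donning it backwards reverses their order.

```python
def reorient(channels, rotation=0, front_to_back=False):
    """Remap a list of per-channel values to a reference orientation.
    rotation is the cyclic offset (in channel positions) determined from
    the reference gesture; front_to_back reverses the channel order."""
    n = len(channels)
    out = [channels[(i + rotation) % n] for i in range(n)]
    if front_to_back:
        out = out[::-1]
    return out


signals = [1, 2, 3, 4, 5, 6, 7, 8]
rotated = reorient(signals, rotation=4)   # 180 degrees on an 8-channel band
```

After the remap, gesture templates defined for the reference orientation can be applied unchanged.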
The position and/or orientation of the wearable device (100) may change during use (e.g., during an extended session of continuous use, such as continuous use for on the order of hours, or from a physical bump or displacement). Accordingly, the various embodiments described herein may include monitoring a quality of match between the signal data provided by the sensors (110 and/or 140) and the gesture identified based on that signal data. In such implementations, the wearable device (100) may include processor-executable instructions stored in a non-transitory processor-readable storage medium (130) that, when executed by the processor (120) of the wearable device (100), cause the processor (120) to monitor a quality of match between the signal data provided by the sensors (110 and/or 140) and the gesture identified based on that signal data. If the quality of match shows signs of degradation (or, for example, the wearable device (100) is unable to recognize a gesture performed by the user after one or more attempts) then the wearable device (100) may be configured to prompt the user to perform or repeat the reference gesture. The wearable device (100) may prompt the user to perform or repeat the reference gesture by, for example, illuminating or flashing a corresponding light emitting diode (LED) or other visual indicator, by activating a vibratory motor or other actuator providing haptic or tactile feedback to the user, or through software on a separate device in communication with the wearable device (100), and so on. Alternatively, the user may identify degradation in the accuracy of gesture identification and volunteer to perform or repeat the reference gesture. 
The user may signify an intent to perform or repeat the reference gesture by, for example, toggling a switch or button on the wearable device (100), or by performing an unambiguously identifiable gesture such as tapping/smacking the wearable device (100) multiple times in quick succession (which is clearly detected by an inertial sensor (140)), etc. The wearable device (100) may be configured to sense when it has been removed by the user (e.g., by sensing an extended period of no inertial sensor (140) activity, or by identifying erratic signals that may be produced by muscle activity sensors (110) as they are decoupled from and/or when they are no longer coupled to the user's body) and to expect a reference gesture when it is put back on by a user. In some implementations, the wearable device (100) may detect, based on corresponding sensor signal data, “motion artifacts” corresponding to friction of one or more sensor electrode(s) over the user's skin. Such motion artifacts may produce characteristic signal data and when the wearable device (100) identifies such data the wearable device (100) may recognize that its position and/or orientation has changed relative to the skin of the user and therefore prompt the user to perform the reference gesture again.
In accordance with the present systems, devices, and methods, a reference gesture used for calibration purposes may (or may not) be distinct from the gestures in the gesture library that a wearable device is operative to identify. When the reference gesture is distinct from the gestures of the gesture library, this approach differs from conventional calibration methods, in which initial instances of specific quantities are typically captured and used to qualify subsequent instances of the same quantities, and is advantageous in that a single reference gesture is used to calibrate the device for all gestures in the gesture library.
In the preceding examples, calibration of a wearable gesture identification device is used as an example of calibration of any gesture identification system. A person of skill in the art will appreciate that the calibration process(es) described above may similarly be implemented by a gesture identification system in which the sensor(s) is/are carried by one or more separate wearable electronic device(s) and the sensor signals are processed by a processor located off of the wearable electronic device(s).
Various embodiments of the present systems, devices, and methods are described as potentially (e.g., optionally) employing at least one activation threshold. As an example, acts 201 and 202 of method 200 (and similarly acts 501 and 502 of method 500) may be repeatedly or continuously performed by the gesture identification system (100) whenever the system (100) is powered on (and/or in an “active” state). However, acts 203 and 204 of method 200 (and similarly acts 503, 504, 505, and 506 of method 500) may only be triggered/completed/performed when at least one signal in the set of signals provided at act 201 (or similarly 501) exceeds a threshold. In the exemplary case of an activation threshold based on one or more RMS value(s), an RMS baseline value of each signal channel in its “rest” or “quiescent” state (i.e., when there is no activity detected) may first be determined and then further acts (e.g., 203 and 204, or 503, 504, 505, and 506) may only be triggered/completed/performed when at least one RMS value of at least one data window exceeds the corresponding “rest” or “quiescent” state for that signal channel by a defined percentage, such as by 50%, by 100%, by 150%, etc. In this case, the activation threshold is represented as the percentage (%) above the “rest” or “quiescent” state that an RMS value must reach. However, a “rest” or “quiescent” state RMS value may be zero, so a person of skill in the art will appreciate that other threshold schemes may be preferred, including but not limited to: a defined percentage (%) of the mean RMS value for the signal channel, a defined percentage (%) of the maximum RMS value for the signal channel, a fixed minimum RMS value, and so on. 
In some implementations, the definition of the activation threshold may adjust to accommodate new data (e.g., the mean RMS value for each signal channel may be continuously, repeatedly or periodically monitored and updated when applying an activation threshold based on the mean RMS value for each signal channel). In order to limit the number of “false positives,” it may be advantageous to implement multiple activation thresholds that must be exceeded substantially simultaneously (and/or a single activation threshold that must be exceeded by multiple values substantially simultaneously) by multiple sensor signal channels and/or at least one activation threshold that must be exceeded by multiple successive data windows.
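An adaptive activation threshold based on a running mean RMS value can be sketched as follows. The percentage and the running-average weight are illustrative choices.

```python
class AdaptiveThreshold:
    """Activation requires the current RMS value to exceed a defined
    percentage of the running mean RMS for the channel; the mean is
    updated only from quiescent (non-activated) windows."""

    def __init__(self, percent=1.5, weight=0.05):
        self.percent = percent    # e.g., 150% of the running mean RMS
        self.weight = weight      # exponential-average update rate
        self.mean_rms = None

    def update(self, rms_value):
        """Fold in one window's RMS value; return True on activation."""
        if self.mean_rms is None:
            self.mean_rms = rms_value
            return False
        activated = rms_value > self.percent * self.mean_rms
        if not activated:
            self.mean_rms += self.weight * (rms_value - self.mean_rms)
        return activated


threshold = AdaptiveThreshold()
```

To limit false positives, several such thresholds (one per signal channel) may be required to fire substantially simultaneously, or over multiple successive windows.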
In accordance with the present systems, devices, and methods, a user's reference gesture may be used to establish at least one activation threshold and/or to normalize signals for that particular user. The reference gesture may be, for example, deliberately selected to involve a Maximum Voluntary Contraction, or MVC, of the user (the exemplary reference gesture described herein is an example of this, where the outward extension of the fingers and bending back of the wrist reaches a maximum point of mobility for most users) and/or the user may be, for example, instructed to perform the reference gesture with particular vigor. In either case, the reference gesture may provide reference values (for example, maximum RMS values, or ratios of maximum RMS values) that may be used by the processor (120) to set activation thresholds and/or to normalize signals provided by the sensors (110 and/or 140) for the specific user.
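Normalization against MVC reference values can be sketched as a per-channel scaling: each channel's maximum RMS value captured during the reference gesture scales that channel's subsequent signals into a roughly user-independent 0-to-1 range. The interface and values are illustrative.

```python
def normalize(channel_values, reference_max):
    """Scale each channel by its MVC reference maximum, clipped at 1.0.
    A zero reference maximum (unused channel) maps to 0.0."""
    return [min(v / m, 1.0) if m > 0 else 0.0
            for v, m in zip(channel_values, reference_max)]


reference = [2.0, 4.0]        # per-channel maxima from the reference gesture
sample = [1.0, 5.0]           # values above the reference clip to 1.0
scaled = normalize(sample, reference)
```

The same reference values may also seed per-channel activation thresholds, e.g., as a fixed fraction of each channel's MVC maximum.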
While the various embodiments described herein provide systems, devices, and methods of gesture identification, in some applications it is also important to identify when a user has stopped performing (i.e., “released”) a gesture. For example, in an application in which the performance of a gesture effects an action and that action is maintained throughout the performance of the gesture, it can be important to identify when the user has stopped performing the gesture in order to stop effecting the controlled action. The present systems, devices, and methods may identify when a user has stopped performing a gesture based on, for example, all signal channels falling below the activation threshold, or the ongoing sequence of window classes ceasing to correspond to the identified gesture.
The signals that are detected and provided by the sensors (110 and/or 140) of a gesture identification system (100) when a user performs a gesture may not be identical each time the same gesture is performed, even by the same user. Discrepancies between different instances of the same gesture may result from variations in many different use parameters, including but not limited to: signal noise, discrepancies in how the gesture is performed, shifts or variations in the orientation and/or position of a wearable device (100) during or between gestures, a different user performing the same gesture, muscle fatigue, a change in environmental or skin conditions, etc. The various embodiments described herein provide systems, devices, and methods for operating a gesture identification system (100) to identify a gesture (or gestures) performed by a user with improved robustness against such variations in use parameters. Improved robustness is achieved, at least in part, by segmenting sensor data into discrete data windows, assigning a respective “window class” to each data window, and identifying the user-performed gesture based on the resulting sequence of window classes. Furthermore, the data stored and processed in the various embodiments described herein is relatively small (in terms of system memory required for storage) and the calculations and other acts involved in processing said data can readily be executed by a relatively low-power, low-performance processor.
The various embodiments described herein may be implemented as an alternative to, or in combination with, the system, articles, and methods for gesture identification described in U.S. Provisional Patent Application Ser. No. 61/881,064 (now US Patent Publication 2015-0084860) and/or U.S. Provisional Patent Application Ser. No. 61/894,263 (now US Patent Publication 2015-0109202), both of which are incorporated by reference herein in their entirety.
Throughout this specification and the appended claims, infinitive verb forms are often used. Examples include, without limitation: “to detect,” “to provide,” “to transmit,” “to communicate,” “to process,” “to route,” and the like. Unless the specific context requires otherwise, such infinitive verb forms are used in an open, inclusive sense, that is as “to, at least, detect,” “to, at least, provide,” “to, at least, transmit,” and so on.
The above description of illustrated embodiments, including what is described in the Abstract, is not intended to be exhaustive or to limit the embodiments to the precise forms disclosed. Although specific embodiments and examples are described herein for illustrative purposes, various equivalent modifications can be made without departing from the spirit and scope of the disclosure, as will be recognized by those skilled in the relevant art. The teachings provided herein of the various embodiments can be applied to other portable and/or wearable electronic devices, not necessarily the exemplary wearable electronic devices generally described above.
For instance, the foregoing detailed description has set forth various embodiments of the devices and/or processes via the use of block diagrams, schematics, and examples. Insofar as such block diagrams, schematics, and examples contain one or more functions and/or operations, it will be understood by those skilled in the art that each function and/or operation within such block diagrams, flowcharts, or examples can be implemented, individually and/or collectively, by a wide range of hardware, software, firmware, or virtually any combination thereof. In one embodiment, the present subject matter may be implemented via Application Specific Integrated Circuits (ASICs). However, those skilled in the art will recognize that the embodiments disclosed herein, in whole or in part, can be equivalently implemented in standard integrated circuits, as one or more computer programs executed by one or more computers (e.g., as one or more programs running on one or more computer systems), as one or more programs executed on one or more controllers (e.g., microcontrollers), as one or more programs executed by one or more processors (e.g., microprocessors, central processing units, graphical processing units), as firmware, or as virtually any combination thereof, and that designing the circuitry and/or writing the code for the software and/or firmware would be well within the skill of one of ordinary skill in the art in light of the teachings of this disclosure.
When logic is implemented as software and stored in memory, logic or information can be stored on any processor-readable medium for use by or in connection with any processor-related system or method. In the context of this disclosure, a memory is a processor-readable medium that is an electronic, magnetic, optical, or other physical device or means that contains or stores a computer and/or processor program. Logic and/or the information can be embodied in any processor-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions associated with logic and/or information.
In the context of this specification, a “non-transitory processor-readable medium” can be any element that can store the program associated with logic and/or information for use by or in connection with the instruction execution system, apparatus, and/or device. The processor-readable medium can be, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device. More specific examples (a non-exhaustive list) of the processor-readable medium would include the following: a portable computer diskette (magnetic, compact flash card, secure digital, or the like), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM, EEPROM, or Flash memory), a portable compact disc read-only memory (CDROM), digital tape, and other non-transitory media.
The various embodiments described above can be combined to provide further embodiments. To the extent that they are not inconsistent with the specific teachings and definitions herein, all of the U.S. patents, U.S. patent application publications, U.S. patent applications, foreign patents, foreign patent applications and non-patent publications referred to in this specification and/or listed in the Application Data Sheet, including but not limited to: U.S. Non-Provisional patent application Ser. No. 14/737,081; U.S. Provisional Patent Application Ser. No. 62/014,605; U.S. Non-Provisional patent application Ser. No. 14/186,878 (now US Patent Publication 2014-0240223); U.S. Non-Provisional patent application Ser. No. 14/186,889 (now US Patent Publication 2014-0240103); U.S. Non-Provisional patent application Ser. No. 14/276,575 (now US Patent Publication 2014-0334083); U.S. Provisional Patent Application Ser. No. 61/869,526 (now US Patent Publication 2015-0057770); U.S. Provisional Application Ser. No. 61/881,064 (now US Patent Publication 2015-0084860); U.S. Provisional Application Ser. No. 61/894,263 (now US Patent Publication 2015-0109202); U.S. Provisional Patent Application Ser. No. 61/909,786 (now U.S. Non-Provisional patent application Ser. No. 14/553,657); U.S. Provisional Patent Application Ser. No. 61/915,338 (now U.S. Non-Provisional patent application Ser. No. 14/567,826); U.S. Provisional Patent Applications Ser. No. 61/940,048 (now U.S. Non-Provisional patent application Ser. No. 14/621,044); and U.S. Provisional Patent Application Ser. No. 61/971,346 (now U.S. Non-Provisional patent application Ser. No. 14/669,878); are incorporated herein by reference, in their entirety. Aspects of the embodiments can be modified, if necessary, to employ systems, circuits and concepts of the various patents, applications and publications to provide yet further embodiments.
These and other changes can be made to the embodiments in light of the above-detailed description. In general, in the following claims, the terms used should not be construed to limit the claims to the specific embodiments disclosed in the specification and the claims, but should be construed to include all possible embodiments along with the full scope of equivalents to which such claims are entitled. Accordingly, the claims are not limited by the disclosure.
Number | Name | Date | Kind |
---|---|---|---|
1411995 | Dull | Apr 1922 | A |
3408133 | Lee | Oct 1968 | A |
3620208 | Higley et al. | Nov 1971 | A |
3712716 | Cornsweet et al. | Jan 1973 | A |
3880146 | Everett et al. | Apr 1975 | A |
4055168 | Miller et al. | Oct 1977 | A |
4602639 | Hoogendoorn et al. | Jul 1986 | A |
4705408 | Jordi | Nov 1987 | A |
4817064 | Milles | Mar 1989 | A |
4896120 | Kamil | Jan 1990 | A |
4978213 | El Hage | Dec 1990 | A |
5003978 | Dunseath, Jr. | Apr 1991 | A |
D322227 | Warhol | Dec 1991 | S |
5081852 | Cox | Jan 1992 | A |
5103323 | Magarinos et al. | Apr 1992 | A |
5231674 | Cleveland et al. | Jul 1993 | A |
5251189 | Thorp | Oct 1993 | A |
D348660 | Parsons | Jul 1994 | S |
5445869 | Ishikawa et al. | Aug 1995 | A |
5467104 | Furness, III et al. | Nov 1995 | A |
5482051 | Reddy et al. | Jan 1996 | A |
5589956 | Morishima et al. | Dec 1996 | A |
5596339 | Furness, III et al. | Jan 1997 | A |
5605059 | Woodward | Feb 1997 | A |
5625577 | Kunii et al. | Apr 1997 | A |
5683404 | Johnson | Nov 1997 | A |
5742421 | Wells et al. | Apr 1998 | A |
6005548 | Latypov et al. | Dec 1999 | A |
6008781 | Furness, III et al. | Dec 1999 | A |
6009210 | Kang | Dec 1999 | A |
6027216 | Guyton et al. | Feb 2000 | A |
6032530 | Hock | Mar 2000 | A |
D422617 | Simioni | Apr 2000 | S |
6184847 | Fateh et al. | Feb 2001 | B1 |
6236476 | Son et al. | May 2001 | B1 |
6238338 | DeLuca et al. | May 2001 | B1 |
6244873 | Hill et al. | Jun 2001 | B1 |
6317103 | Furness, III et al. | Nov 2001 | B1 |
6377277 | Yamamoto | Apr 2002 | B1 |
D459352 | Giovanniello | Jun 2002 | S |
6411843 | Zarychta | Jun 2002 | B1 |
6487906 | Hock | Dec 2002 | B1 |
6510333 | Licata et al. | Jan 2003 | B1 |
6527711 | Stivoric et al. | Mar 2003 | B1 |
6619836 | Silvant et al. | Sep 2003 | B1 |
6639570 | Furness, III et al. | Oct 2003 | B2 |
6658287 | Litt et al. | Dec 2003 | B1 |
6720984 | Jorgensen et al. | Apr 2004 | B1 |
6743982 | Biegelsen et al. | Jun 2004 | B2 |
6774885 | Even-Zohar | Aug 2004 | B1 |
6807438 | Brun del Re et al. | Oct 2004 | B1 |
D502661 | Rapport | Mar 2005 | S |
D502662 | Rapport | Mar 2005 | S |
6865409 | Getsla et al. | Mar 2005 | B2 |
D503646 | Rapport | Apr 2005 | S |
6880364 | Vidolin et al. | Apr 2005 | B1 |
6927343 | Watanabe et al. | Aug 2005 | B2 |
6942621 | Avinash et al. | Sep 2005 | B2 |
6965842 | Rekimoto | Nov 2005 | B2 |
6972734 | Ohshima et al. | Dec 2005 | B1 |
6984208 | Zheng | Jan 2006 | B2 |
7022919 | Brist et al. | Apr 2006 | B2 |
7028507 | Rapport | Apr 2006 | B2 |
7086218 | Pasach | Aug 2006 | B1 |
7089148 | Bachmann et al. | Aug 2006 | B1 |
D535401 | Travis et al. | Jan 2007 | S |
7173437 | Hervieux et al. | Feb 2007 | B2 |
7209114 | Radley-Smith | Apr 2007 | B2 |
D543212 | Marks | May 2007 | S |
7265298 | Maghribi et al. | Sep 2007 | B2 |
7271774 | Puuri | Sep 2007 | B2 |
7333090 | Tanaka et al. | Feb 2008 | B2 |
7351975 | Brady et al. | Apr 2008 | B2 |
7450107 | Radley-Smith | Nov 2008 | B2 |
7473888 | Wine et al. | Jan 2009 | B2 |
7491892 | Wagner et al. | Feb 2009 | B2 |
7517725 | Reis | Apr 2009 | B2 |
7558622 | Tran | Jul 2009 | B2 |
7574253 | Edney et al. | Aug 2009 | B2 |
7580742 | Tan et al. | Aug 2009 | B2 |
7596393 | Jung et al. | Sep 2009 | B2 |
7618260 | Daniel et al. | Nov 2009 | B2 |
7636549 | Ma et al. | Dec 2009 | B2 |
7640007 | Chen et al. | Dec 2009 | B2 |
7660126 | Cho et al. | Feb 2010 | B2 |
7684105 | Lamontagne et al. | Mar 2010 | B2 |
7747113 | Mukawa et al. | Jun 2010 | B2 |
7773111 | Cleveland et al. | Aug 2010 | B2 |
7787946 | Stahmann et al. | Aug 2010 | B2 |
7805386 | Greer | Sep 2010 | B2 |
7809435 | Ettare et al. | Oct 2010 | B1 |
7844310 | Anderson | Nov 2010 | B2 |
D628616 | Yuan | Dec 2010 | S |
7850306 | Uusitalo et al. | Dec 2010 | B2 |
7870211 | Pascal et al. | Jan 2011 | B2 |
D633939 | Puentes et al. | Mar 2011 | S |
D634771 | Fuchs | Mar 2011 | S |
7901368 | Flaherty et al. | Mar 2011 | B2 |
7925100 | Howell et al. | Apr 2011 | B2 |
7948763 | Chuang | May 2011 | B2 |
D640314 | Yang | Jun 2011 | S |
D643428 | Janky et al. | Aug 2011 | S |
D646192 | Woode | Oct 2011 | S |
D649177 | Cho et al. | Nov 2011 | S |
8054061 | Prance et al. | Nov 2011 | B2 |
D654622 | Hsu | Feb 2012 | S |
8120828 | Schwerdtner | Feb 2012 | B2 |
8170656 | Tan et al. | May 2012 | B2 |
8179604 | Prada Gomez et al. | May 2012 | B1 |
8188937 | Amafuji et al. | May 2012 | B1 |
8190249 | Gharieb et al. | May 2012 | B1 |
D661613 | Demeglio | Jun 2012 | S |
8203502 | Chi et al. | Jun 2012 | B1 |
8207473 | Axisa et al. | Jun 2012 | B2 |
8212859 | Tang et al. | Jul 2012 | B2 |
D667482 | Healy et al. | Sep 2012 | S |
D669522 | Klinar et al. | Oct 2012 | S |
D669523 | Wakata et al. | Oct 2012 | S |
D671590 | Klinar et al. | Nov 2012 | S |
8311623 | Sanger | Nov 2012 | B2 |
8351651 | Lee | Jan 2013 | B2 |
8355671 | Kramer et al. | Jan 2013 | B2 |
8389862 | Arora et al. | Mar 2013 | B2 |
8421634 | Tan et al. | Apr 2013 | B2 |
8427977 | Workman et al. | Apr 2013 | B2 |
D682343 | Waters | May 2013 | S |
D682727 | Bulgari | May 2013 | S |
8435191 | Barboutis et al. | May 2013 | B2 |
8437844 | Syed Momen et al. | May 2013 | B2 |
8447704 | Tan et al. | May 2013 | B2 |
D685019 | Li | Jun 2013 | S |
8467270 | Gossweiler, III et al. | Jun 2013 | B2 |
8469741 | Oster et al. | Jun 2013 | B2 |
D687087 | Iurilli | Jul 2013 | S |
8484022 | Vanhoucke | Jul 2013 | B1 |
D689862 | Liu | Sep 2013 | S |
D692941 | Klinar et al. | Nov 2013 | S |
8591411 | Banet et al. | Nov 2013 | B2 |
D695333 | Farnam et al. | Dec 2013 | S |
D695454 | Moore | Dec 2013 | S |
8620361 | Bailey et al. | Dec 2013 | B2 |
8624124 | Koo et al. | Jan 2014 | B2 |
8634119 | Bablumyan et al. | Jan 2014 | B2 |
D701555 | Markovitz et al. | Mar 2014 | S |
8666212 | Amirparviz | Mar 2014 | B1 |
8702629 | Giuffrida et al. | Apr 2014 | B2 |
8704882 | Turner | Apr 2014 | B2 |
D704248 | DiChiara | May 2014 | S |
8718980 | Garudadri et al. | May 2014 | B2 |
8744543 | Li et al. | Jun 2014 | B2 |
8754862 | Zaliva | Jun 2014 | B2 |
8777668 | Ikeda et al. | Jul 2014 | B2 |
D716457 | Brefka et al. | Oct 2014 | S |
D717685 | Bailey et al. | Nov 2014 | S |
8879276 | Wang | Nov 2014 | B2 |
8880163 | Barachant et al. | Nov 2014 | B2 |
8883287 | Boyce et al. | Nov 2014 | B2 |
8890875 | Jammes et al. | Nov 2014 | B2 |
8892479 | Tan et al. | Nov 2014 | B2 |
8895865 | Lenahan et al. | Nov 2014 | B2 |
D719568 | Heinrich et al. | Dec 2014 | S |
D719570 | Heinrich et al. | Dec 2014 | S |
8912094 | Koo et al. | Dec 2014 | B2 |
8922481 | Kauffmann et al. | Dec 2014 | B1 |
D723093 | Li | Feb 2015 | S |
8954135 | Yuen et al. | Feb 2015 | B2 |
D724647 | Rohrbach | Mar 2015 | S |
8970571 | Wong et al. | Mar 2015 | B1 |
8971023 | Olsson et al. | Mar 2015 | B2 |
9018532 | Wesselmann et al. | Apr 2015 | B2 |
9037530 | Tan et al. | May 2015 | B2 |
9086687 | Park et al. | Jul 2015 | B2 |
9092664 | Forutanpour et al. | Jul 2015 | B2 |
D736664 | Paradise et al. | Aug 2015 | S |
D738373 | Davies et al. | Sep 2015 | S |
9135708 | Ebisawa | Sep 2015 | B2 |
9146730 | Lazar | Sep 2015 | B2 |
D741855 | Park et al. | Oct 2015 | S |
9170674 | Forutanpour et al. | Oct 2015 | B2 |
D742272 | Bailey et al. | Nov 2015 | S |
D742874 | Cheng et al. | Nov 2015 | S |
D743963 | Osterhout | Nov 2015 | S |
9211417 | Heldman et al. | Dec 2015 | B2 |
9218574 | Phillipps et al. | Dec 2015 | B2 |
D747714 | Erbeus | Jan 2016 | S |
D747759 | Ho | Jan 2016 | S |
9235934 | Mandella et al. | Jan 2016 | B2 |
9240069 | Li | Jan 2016 | B1 |
D750623 | Park et al. | Mar 2016 | S |
D751065 | Magi | Mar 2016 | S |
9278453 | Assad | Mar 2016 | B2 |
9299248 | Lake et al. | Mar 2016 | B2 |
D756359 | Bailey et al. | May 2016 | S |
9349280 | Baldwin | May 2016 | B2 |
D758476 | Ho | Jun 2016 | S |
D760313 | Ho et al. | Jun 2016 | S |
9367139 | Ataee et al. | Jun 2016 | B2 |
9372535 | Bailey et al. | Jun 2016 | B2 |
9389694 | Ataee et al. | Jul 2016 | B2 |
9393418 | Giuffrida et al. | Jul 2016 | B2 |
9408316 | Bailey et al. | Aug 2016 | B2 |
9418927 | Axisa et al. | Aug 2016 | B2 |
D766895 | Choi | Sep 2016 | S |
9439566 | Arne et al. | Sep 2016 | B2 |
D768627 | Rochat et al. | Oct 2016 | S |
9459697 | Bedikian et al. | Oct 2016 | B2 |
9472956 | Michaelis et al. | Oct 2016 | B2 |
9477313 | Mistry et al. | Oct 2016 | B2 |
D771735 | Lee et al. | Nov 2016 | S |
9483123 | Aleem et al. | Nov 2016 | B2 |
9529434 | Choi et al. | Dec 2016 | B2 |
D780828 | Bonaventura et al. | Mar 2017 | S |
D780829 | Bonaventura et al. | Mar 2017 | S |
9597015 | McNames et al. | Mar 2017 | B2 |
9600030 | Bailey et al. | Mar 2017 | B2 |
9612661 | Wagner et al. | Apr 2017 | B2 |
9613262 | Holz | Apr 2017 | B2 |
9654477 | Kotamraju | May 2017 | B1 |
9659403 | Horowitz | May 2017 | B1 |
9687168 | John | Jun 2017 | B2 |
9696795 | Marcolina et al. | Jul 2017 | B2 |
9720515 | Wagner et al. | Aug 2017 | B2 |
9741169 | Holz | Aug 2017 | B1 |
9766709 | Holz | Sep 2017 | B2 |
9785247 | Horowitz et al. | Oct 2017 | B1 |
9788789 | Bailey | Oct 2017 | B2 |
9807221 | Bailey et al. | Oct 2017 | B2 |
9864431 | Keskin et al. | Jan 2018 | B2 |
9867548 | Le et al. | Jan 2018 | B2 |
9880632 | Ataee et al. | Jan 2018 | B2 |
9891718 | Connor | Feb 2018 | B2 |
10042422 | Morun et al. | Aug 2018 | B2 |
10070799 | Ang et al. | Sep 2018 | B2 |
10078435 | Noel | Sep 2018 | B2 |
10101809 | Morun et al. | Oct 2018 | B2 |
10152082 | Bailey | Dec 2018 | B2 |
10188309 | Morun et al. | Jan 2019 | B2 |
10199008 | Aleem et al. | Feb 2019 | B2 |
10203751 | Keskin et al. | Feb 2019 | B2 |
10216274 | Chapeskie et al. | Feb 2019 | B2 |
10251577 | Morun et al. | Apr 2019 | B2 |
10310601 | Morun et al. | Jun 2019 | B2 |
10331210 | Morun et al. | Jun 2019 | B2 |
10362958 | Morun et al. | Jul 2019 | B2 |
10409371 | Kaifosh et al. | Sep 2019 | B2 |
10429928 | Morun et al. | Oct 2019 | B2 |
10460455 | Giurgica-Tiron et al. | Oct 2019 | B2 |
20010033402 | Popovich | Oct 2001 | A1 |
20020003627 | Rieder | Jan 2002 | A1 |
20020009972 | Amento | Jan 2002 | A1 |
20020030636 | Richards | Mar 2002 | A1 |
20020032386 | Sackner et al. | Mar 2002 | A1 |
20020077534 | DuRousseau | Jun 2002 | A1 |
20020120415 | Millott | Aug 2002 | A1 |
20020120916 | Snider, Jr. | Aug 2002 | A1 |
20030030595 | Radley-Smith | Feb 2003 | A1 |
20030036691 | Stanaland et al. | Feb 2003 | A1 |
20030051505 | Robertson et al. | Mar 2003 | A1 |
20030144586 | Tsubata | Jul 2003 | A1 |
20030144829 | Geatz et al. | Jul 2003 | A1 |
20030171921 | Manabe et al. | Sep 2003 | A1 |
20030184544 | Prudent | Oct 2003 | A1 |
20040024312 | Zheng | Feb 2004 | A1 |
20040068409 | Tanaka et al. | Apr 2004 | A1 |
20040073104 | Brun del Re et al. | Apr 2004 | A1 |
20040092839 | Shin et al. | May 2004 | A1 |
20040194500 | Rapport | Oct 2004 | A1 |
20040210165 | Marmaropoulos et al. | Oct 2004 | A1 |
20050005637 | Rapport | Jan 2005 | A1 |
20050012715 | Ford | Jan 2005 | A1 |
20050070227 | Shen et al. | Mar 2005 | A1 |
20050119701 | Lauter et al. | Jun 2005 | A1 |
20050177038 | Kolpin et al. | Aug 2005 | A1 |
20050179644 | Alsio et al. | Aug 2005 | A1 |
20060037359 | Stinespring | Feb 2006 | A1 |
20060061544 | Min et al. | Mar 2006 | A1 |
20060121958 | Jung | Jun 2006 | A1 |
20060132705 | Li | Jun 2006 | A1 |
20060238707 | Elvesjo et al. | Oct 2006 | A1 |
20070009151 | Pittman et al. | Jan 2007 | A1 |
20070078308 | Daly | Apr 2007 | A1 |
20070132785 | Ebersole, Jr. et al. | Jun 2007 | A1 |
20070172797 | Hada et al. | Jul 2007 | A1 |
20070177770 | Derchak et al. | Aug 2007 | A1 |
20070256494 | Nakamura et al. | Nov 2007 | A1 |
20070276270 | Tran | Nov 2007 | A1 |
20070279852 | Daniel | Dec 2007 | A1 |
20070285399 | Lund | Dec 2007 | A1 |
20080001735 | Tran | Jan 2008 | A1 |
20080032638 | Anderson | Feb 2008 | A1 |
20080052643 | Ike et al. | Feb 2008 | A1 |
20080136775 | Conant | Jun 2008 | A1 |
20080214360 | Stirling et al. | Sep 2008 | A1 |
20080221487 | Zohar et al. | Sep 2008 | A1 |
20090007597 | Hanevold | Jan 2009 | A1 |
20090031757 | Harding | Feb 2009 | A1 |
20090040016 | Ikeda | Feb 2009 | A1 |
20090051544 | Niknejad | Feb 2009 | A1 |
20090082692 | Hale et al. | Mar 2009 | A1 |
20090082701 | Zohar et al. | Mar 2009 | A1 |
20090085864 | Kutliroff | Apr 2009 | A1 |
20090102580 | Uchaykin | Apr 2009 | A1 |
20090109241 | Tsujimoto | Apr 2009 | A1 |
20090112080 | Matthews | Apr 2009 | A1 |
20090124881 | Rytky | May 2009 | A1 |
20090147004 | Ramon et al. | Jun 2009 | A1 |
20090179824 | Tsujimoto et al. | Jul 2009 | A1 |
20090189867 | Krah et al. | Jul 2009 | A1 |
20090207464 | Wiltshire et al. | Aug 2009 | A1 |
20090251407 | Flake et al. | Oct 2009 | A1 |
20090258669 | Nie et al. | Oct 2009 | A1 |
20090318785 | Ishikawa et al. | Dec 2009 | A1 |
20090322653 | Putilin et al. | Dec 2009 | A1 |
20090326406 | Tan et al. | Dec 2009 | A1 |
20090327171 | Tan | Dec 2009 | A1 |
20100030532 | Arora et al. | Feb 2010 | A1 |
20100041974 | Ting et al. | Feb 2010 | A1 |
20100063794 | Hernandez-Rebollar | Mar 2010 | A1 |
20100066664 | Son | Mar 2010 | A1 |
20100106044 | Linderman | Apr 2010 | A1 |
20100113910 | Brauers et al. | May 2010 | A1 |
20100142015 | Kuwahara et al. | Jun 2010 | A1 |
20100149073 | Chaum et al. | Jun 2010 | A1 |
20100150415 | Atkinson et al. | Jun 2010 | A1 |
20100280628 | Sankai | Nov 2010 | A1 |
20100292617 | Lei et al. | Nov 2010 | A1 |
20100293115 | Seyed Momen | Nov 2010 | A1 |
20100315266 | Gunawardana et al. | Dec 2010 | A1 |
20100317958 | Beck et al. | Dec 2010 | A1 |
20110018754 | Tojima et al. | Jan 2011 | A1 |
20110025982 | Takahashi | Feb 2011 | A1 |
20110054360 | Son et al. | Mar 2011 | A1 |
20110065319 | Oster et al. | Mar 2011 | A1 |
20110072510 | Cheswick | Mar 2011 | A1 |
20110077484 | Van Slyke et al. | Mar 2011 | A1 |
20110092826 | Lee et al. | Apr 2011 | A1 |
20110133934 | Tan | Jun 2011 | A1 |
20110134026 | Kang et al. | Jun 2011 | A1 |
20110166434 | Gargiulo | Jul 2011 | A1 |
20110172503 | Knepper et al. | Jul 2011 | A1 |
20110173204 | Murillo et al. | Jul 2011 | A1 |
20110181527 | Capela et al. | Jul 2011 | A1 |
20110213278 | Horak et al. | Sep 2011 | A1 |
20110224507 | Banet et al. | Sep 2011 | A1 |
20110224556 | Moon et al. | Sep 2011 | A1 |
20110224564 | Moon et al. | Sep 2011 | A1 |
20120002256 | Lacoste et al. | Jan 2012 | A1 |
20120029322 | Wartena et al. | Feb 2012 | A1 |
20120051005 | Vanfleteren et al. | Mar 2012 | A1 |
20120052268 | Axisa et al. | Mar 2012 | A1 |
20120053439 | Ylostalo et al. | Mar 2012 | A1 |
20120066163 | Balls et al. | Mar 2012 | A1 |
20120071092 | Pasquero et al. | Mar 2012 | A1 |
20120101357 | Hoskuldsson et al. | Apr 2012 | A1 |
20120139817 | Freeman | Jun 2012 | A1 |
20120157789 | Kangas et al. | Jun 2012 | A1 |
20120157886 | Tenn et al. | Jun 2012 | A1 |
20120165695 | Kidmose et al. | Jun 2012 | A1 |
20120182309 | Griffin et al. | Jul 2012 | A1 |
20120188158 | Tan et al. | Jul 2012 | A1 |
20120203076 | Fatta et al. | Aug 2012 | A1 |
20120209134 | Morita et al. | Aug 2012 | A1 |
20120226130 | De Graff et al. | Sep 2012 | A1 |
20120249797 | Haddick et al. | Oct 2012 | A1 |
20120265090 | Fink et al. | Oct 2012 | A1 |
20120265480 | Oshima | Oct 2012 | A1 |
20120275621 | Elko | Nov 2012 | A1 |
20120283526 | Gommesen et al. | Nov 2012 | A1 |
20120283896 | Persaud et al. | Nov 2012 | A1 |
20120293548 | Perez et al. | Nov 2012 | A1 |
20120302858 | Kidmose et al. | Nov 2012 | A1 |
20120320532 | Wang | Dec 2012 | A1 |
20120323521 | De Foras | Dec 2012 | A1 |
20130004033 | Trugenberger | Jan 2013 | A1 |
20130005303 | Song et al. | Jan 2013 | A1 |
20130016292 | Miao et al. | Jan 2013 | A1 |
20130016413 | Saeedi et al. | Jan 2013 | A1 |
20130020948 | Han et al. | Jan 2013 | A1 |
20130027341 | Mastandrea | Jan 2013 | A1 |
20130077820 | Marais et al. | Mar 2013 | A1 |
20130080794 | Hsieh | Mar 2013 | A1 |
20130123666 | Giuffrida et al. | May 2013 | A1 |
20130127708 | Jung et al. | May 2013 | A1 |
20130135722 | Yokoyama | May 2013 | A1 |
20130141375 | Ludwig et al. | Jun 2013 | A1 |
20130165813 | Chang et al. | Jun 2013 | A1 |
20130191741 | Dickinson et al. | Jul 2013 | A1 |
20130198694 | Rahman et al. | Aug 2013 | A1 |
20130207889 | Chang et al. | Aug 2013 | A1 |
20130215235 | Russell | Aug 2013 | A1 |
20130217998 | Mahfouz et al. | Aug 2013 | A1 |
20130222384 | Futterer | Aug 2013 | A1 |
20130232095 | Tan et al. | Sep 2013 | A1 |
20130265229 | Forutanpour et al. | Oct 2013 | A1 |
20130265437 | Thörn et al. | Oct 2013 | A1 |
20130271292 | McDermott | Oct 2013 | A1 |
20130285901 | Lee et al. | Oct 2013 | A1 |
20130293580 | Spivack | Nov 2013 | A1 |
20130312256 | Wesselmann et al. | Nov 2013 | A1 |
20130317382 | Le | Nov 2013 | A1 |
20130317648 | Assad | Nov 2013 | A1 |
20130332196 | Pinsker | Dec 2013 | A1 |
20130335302 | Crane et al. | Dec 2013 | A1 |
20140005743 | Giuffrida et al. | Jan 2014 | A1 |
20140020945 | Hurwitz et al. | Jan 2014 | A1 |
20140028539 | Newham et al. | Jan 2014 | A1 |
20140028546 | Jeon et al. | Jan 2014 | A1 |
20140045547 | Singamsetty et al. | Feb 2014 | A1 |
20140049417 | Abdurrahman et al. | Feb 2014 | A1 |
20140051946 | Arne et al. | Feb 2014 | A1 |
20140052150 | Taylor et al. | Feb 2014 | A1 |
20140074179 | Heldman et al. | Mar 2014 | A1 |
20140092009 | Yen et al. | Apr 2014 | A1 |
20140094675 | Luna et al. | Apr 2014 | A1 |
20140098018 | Kim et al. | Apr 2014 | A1 |
20140107493 | Yuen et al. | Apr 2014 | A1 |
20140121471 | Walker | May 2014 | A1 |
20140122958 | Greenberg et al. | May 2014 | A1 |
20140132512 | Gomez Sainz-Garcia | May 2014 | A1 |
20140139422 | Mistry et al. | May 2014 | A1 |
20140157168 | Albouyeh et al. | Jun 2014 | A1 |
20140194062 | Palin et al. | Jul 2014 | A1 |
20140196131 | Lee | Jul 2014 | A1 |
20140198034 | Bailey et al. | Jul 2014 | A1 |
20140198035 | Bailey et al. | Jul 2014 | A1 |
20140198944 | Forutanpour et al. | Jul 2014 | A1 |
20140202643 | Hikmet et al. | Jul 2014 | A1 |
20140204455 | Popovich et al. | Jul 2014 | A1 |
20140223462 | Aimone et al. | Aug 2014 | A1 |
20140226193 | Sun | Aug 2014 | A1 |
20140232651 | Kress et al. | Aug 2014 | A1 |
20140236031 | Banet et al. | Aug 2014 | A1 |
20140240103 | Lake et al. | Aug 2014 | A1 |
20140240223 | Lake et al. | Aug 2014 | A1 |
20140245200 | Holz | Aug 2014 | A1 |
20140249397 | Lake et al. | Sep 2014 | A1 |
20140257141 | Giuffrida et al. | Sep 2014 | A1 |
20140258864 | Shenoy et al. | Sep 2014 | A1 |
20140278441 | Ton et al. | Sep 2014 | A1 |
20140285326 | Luna et al. | Sep 2014 | A1 |
20140285429 | Simmons | Sep 2014 | A1 |
20140297528 | Agrawal et al. | Oct 2014 | A1 |
20140299362 | Park et al. | Oct 2014 | A1 |
20140304665 | Holz | Oct 2014 | A1 |
20140334083 | Bailey | Nov 2014 | A1 |
20140334653 | Luna et al. | Nov 2014 | A1 |
20140337861 | Chang et al. | Nov 2014 | A1 |
20140340857 | Hsu et al. | Nov 2014 | A1 |
20140344731 | Holz | Nov 2014 | A1 |
20140349257 | Connor | Nov 2014 | A1 |
20140354528 | Laughlin et al. | Dec 2014 | A1 |
20140354529 | Laughlin et al. | Dec 2014 | A1 |
20140355825 | Kim et al. | Dec 2014 | A1 |
20140364703 | Kim et al. | Dec 2014 | A1 |
20140365163 | Jallon | Dec 2014 | A1 |
20140368424 | Choi et al. | Dec 2014 | A1 |
20140368896 | Nakazono et al. | Dec 2014 | A1 |
20140375465 | Fenuccio et al. | Dec 2014 | A1 |
20140376773 | Holz | Dec 2014 | A1 |
20150006120 | Sett et al. | Jan 2015 | A1 |
20150010203 | Muninder et al. | Jan 2015 | A1 |
20150011857 | Henson et al. | Jan 2015 | A1 |
20150025355 | Bailey et al. | Jan 2015 | A1 |
20150029092 | Holz et al. | Jan 2015 | A1 |
20150035827 | Yamaoka et al. | Feb 2015 | A1 |
20150036221 | Stephenson | Feb 2015 | A1 |
20150045689 | Barone | Feb 2015 | A1 |
20150045699 | Mokaya et al. | Feb 2015 | A1 |
20150051470 | Bailey et al. | Feb 2015 | A1 |
20150057506 | Luna et al. | Feb 2015 | A1 |
20150057770 | Bailey et al. | Feb 2015 | A1 |
20150065840 | Bailey | Mar 2015 | A1 |
20150070270 | Bailey et al. | Mar 2015 | A1 |
20150070274 | Morozov | Mar 2015 | A1 |
20150084860 | Aleem et al. | Mar 2015 | A1 |
20150106052 | Balakrishnan | Apr 2015 | A1 |
20150109202 | Ataee et al. | Apr 2015 | A1 |
20150124566 | Lake et al. | May 2015 | A1 |
20150128094 | Baldwin et al. | May 2015 | A1 |
20150141784 | Morun et al. | May 2015 | A1 |
20150148641 | Morun et al. | May 2015 | A1 |
20150157944 | Gottlieb | Jun 2015 | A1 |
20150160621 | Yilmaz | Jun 2015 | A1 |
20150169074 | Ataee | Jun 2015 | A1 |
20150182113 | Utter, II | Jul 2015 | A1 |
20150182130 | Utter, II | Jul 2015 | A1 |
20150182163 | Utter | Jul 2015 | A1 |
20150182164 | Utter, II | Jul 2015 | A1 |
20150185838 | Camacho-Perez et al. | Jul 2015 | A1 |
20150186609 | Utter, II | Jul 2015 | A1 |
20150193949 | Katz et al. | Jul 2015 | A1 |
20150205126 | Schowengerdt | Jul 2015 | A1 |
20150205134 | Bailey et al. | Jul 2015 | A1 |
20150216475 | Luna et al. | Aug 2015 | A1 |
20150223716 | Korkala et al. | Aug 2015 | A1 |
20150230756 | Luna et al. | Aug 2015 | A1 |
20150234426 | Bailey et al. | Aug 2015 | A1 |
20150237716 | Su et al. | Aug 2015 | A1 |
20150242120 | Rodriguez | Aug 2015 | A1 |
20150261306 | Lake | Sep 2015 | A1 |
20150261318 | Scavezze et al. | Sep 2015 | A1 |
20150277575 | Ataee et al. | Oct 2015 | A1 |
20150296553 | DiFranco et al. | Oct 2015 | A1 |
20150302168 | De Sapio et al. | Oct 2015 | A1 |
20150309563 | Connor | Oct 2015 | A1 |
20150309582 | Gupta | Oct 2015 | A1 |
20150310766 | Alshehri et al. | Oct 2015 | A1 |
20150313496 | Connor | Nov 2015 | A1 |
20150325202 | Lake et al. | Nov 2015 | A1 |
20150346701 | Gordon et al. | Dec 2015 | A1 |
20150362734 | Moser et al. | Dec 2015 | A1 |
20150366504 | Connor | Dec 2015 | A1 |
20150370326 | Chapeskie et al. | Dec 2015 | A1 |
20150370333 | Ataee et al. | Dec 2015 | A1 |
20150378161 | Bailey et al. | Dec 2015 | A1 |
20150378162 | Bailey et al. | Dec 2015 | A1 |
20150378164 | Bailey et al. | Dec 2015 | A1 |
20160011668 | Gilad-Bachrach et al. | Jan 2016 | A1 |
20160020500 | Matsuda | Jan 2016 | A1 |
20160033771 | Tremblay et al. | Feb 2016 | A1 |
20160049073 | Lee | Feb 2016 | A1 |
20160144172 | Hsueh et al. | May 2016 | A1 |
20160150636 | Otsubo | May 2016 | A1 |
20160156762 | Bailey et al. | Jun 2016 | A1 |
20160162604 | Xioli et al. | Jun 2016 | A1 |
20160187992 | Yamamoto et al. | Jun 2016 | A1 |
20160199699 | Klassen | Jul 2016 | A1 |
20160202081 | Debieuvre et al. | Jul 2016 | A1 |
20160235323 | Tadi et al. | Aug 2016 | A1 |
20160238845 | Alexander et al. | Aug 2016 | A1 |
20160239080 | Marcolina et al. | Aug 2016 | A1 |
20160246384 | Mullins et al. | Aug 2016 | A1 |
20160262687 | Imperial | Sep 2016 | A1 |
20160274365 | Bailey et al. | Sep 2016 | A1 |
20160274758 | Bailey | Sep 2016 | A1 |
20160292497 | Kehtarnavaz et al. | Oct 2016 | A1 |
20160309249 | Wu et al. | Oct 2016 | A1 |
20160313798 | Connor | Oct 2016 | A1 |
20160313801 | Wagner et al. | Oct 2016 | A1 |
20160313899 | Noel | Oct 2016 | A1 |
20160327796 | Bailey et al. | Nov 2016 | A1 |
20160327797 | Bailey et al. | Nov 2016 | A1 |
20160349514 | Alexander et al. | Dec 2016 | A1 |
20160349515 | Alexander et al. | Dec 2016 | A1 |
20160349516 | Alexander et al. | Dec 2016 | A1 |
20160350973 | Shapira et al. | Dec 2016 | A1 |
20160377865 | Alexander et al. | Dec 2016 | A1 |
20160377866 | Alexander et al. | Dec 2016 | A1 |
20170031502 | Rosenberg et al. | Feb 2017 | A1 |
20170035313 | Hong et al. | Feb 2017 | A1 |
20170061817 | Mettler May | Mar 2017 | A1 |
20170068095 | Holland et al. | Mar 2017 | A1 |
20170080346 | Abbas | Mar 2017 | A1 |
20170090604 | Barbier | Mar 2017 | A1 |
20170091567 | Wang et al. | Mar 2017 | A1 |
20170097753 | Bailey et al. | Apr 2017 | A1 |
20170115483 | Aleem et al. | Apr 2017 | A1 |
20170119472 | Herrmann et al. | May 2017 | A1 |
20170123487 | Hazra et al. | May 2017 | A1 |
20170124816 | Yang et al. | May 2017 | A1 |
20170127354 | Garland et al. | May 2017 | A1 |
20170153701 | Mahon et al. | Jun 2017 | A1 |
20170161635 | Oono et al. | Jun 2017 | A1 |
20170188980 | Ash | Jul 2017 | A1 |
20170205876 | Vidal et al. | Jul 2017 | A1 |
20170212290 | Alexander et al. | Jul 2017 | A1 |
20170212349 | Bailey et al. | Jul 2017 | A1 |
20170219829 | Bailey | Aug 2017 | A1 |
20170259167 | Cook et al. | Sep 2017 | A1 |
20170285756 | Wang et al. | Oct 2017 | A1 |
20170285848 | Rosenberg et al. | Oct 2017 | A1 |
20170296363 | Yetkin et al. | Oct 2017 | A1 |
20170299956 | Holland et al. | Oct 2017 | A1 |
20170301630 | Nguyen et al. | Oct 2017 | A1 |
20170308118 | Ito | Oct 2017 | A1 |
20180000367 | Longinotti-Buitoni | Jan 2018 | A1 |
20180020951 | Kaifosh et al. | Jan 2018 | A1 |
20180020978 | Kaifosh et al. | Jan 2018 | A1 |
20180024634 | Kaifosh et al. | Jan 2018 | A1 |
20180024635 | Kaifosh et al. | Jan 2018 | A1 |
20180064363 | Morun et al. | Mar 2018 | A1 |
20180067553 | Morun et al. | Mar 2018 | A1 |
20180088765 | Bailey | Mar 2018 | A1 |
20180095630 | Bailey | Apr 2018 | A1 |
20180101289 | Bailey | Apr 2018 | A1 |
20180120948 | Aleem et al. | May 2018 | A1 |
20180140441 | Poirters | May 2018 | A1 |
20180150033 | Lake et al. | May 2018 | A1 |
20180153430 | Ang et al. | Jun 2018 | A1 |
20180153444 | Yang et al. | Jun 2018 | A1 |
20180154140 | Bouton et al. | Jun 2018 | A1 |
20180301057 | Hargrove et al. | Oct 2018 | A1 |
20180307314 | Connor | Oct 2018 | A1 |
20180321745 | Morun et al. | Nov 2018 | A1 |
20180321746 | Morun et al. | Nov 2018 | A1 |
20180333575 | Bouton | Nov 2018 | A1 |
20180344195 | Morun et al. | Dec 2018 | A1 |
20180360379 | Harrison et al. | Dec 2018 | A1 |
20190025919 | Tadi et al. | Jan 2019 | A1 |
20190033967 | Morun et al. | Jan 2019 | A1 |
20190038166 | Tavabi et al. | Feb 2019 | A1 |
20190076716 | Chiou et al. | Mar 2019 | A1 |
20190121305 | Kaifosh et al. | Apr 2019 | A1 |
20190121306 | Kaifosh et al. | Apr 2019 | A1 |
20190150777 | Guo et al. | May 2019 | A1 |
20190192037 | Morun et al. | Jun 2019 | A1 |
20190212817 | Kaifosh et al. | Jul 2019 | A1 |
20190223748 | Al-natsheh et al. | Jul 2019 | A1 |
20190227627 | Kaifosh et al. | Jul 2019 | A1 |
20190228330 | Kaifosh et al. | Jul 2019 | A1 |
20190228533 | Giurgica-Tiron et al. | Jul 2019 | A1 |
20190228579 | Kaifosh et al. | Jul 2019 | A1 |
20190228590 | Kaifosh et al. | Jul 2019 | A1 |
20190228591 | Giurgica-Tiron et al. | Jul 2019 | A1 |
Number | Date | Country |
---|---|---|
2902045 | Aug 2014 | CA |
2921954 | Feb 2015 | CA |
2939644 | Aug 2015 | CA |
1838933 | Sep 2006 | CN |
102246125 | Nov 2011 | CN |
105190578 | Dec 2015 | CN |
106102504 | Nov 2016 | CN |
44 12 278 | Oct 1995 | DE |
0 301 790 | Feb 1989 | EP |
2009-50679 | Mar 2009 | EP |
2198521 | Jun 2012 | EP |
2733578 | May 2014 | EP |
2959394 | Dec 2015 | EP |
3104737 | Dec 2016 | EP |
61-198892 | Sep 1986 | JP |
H05-277080 | Oct 1993 | JP |
7248873 | Sep 1995 | JP |
2005-095561 | Apr 2005 | JP |
2005-352739 | Dec 2005 | JP |
2008-192004 | Aug 2008 | JP |
2010-520561 | Jun 2010 | JP |
2013-160905 | Aug 2013 | JP |
2016-507851 | Mar 2016 | JP |
2017-509386 | Apr 2017 | JP |
10-2012-0094870 | Aug 2012 | KR
10-2012-0097997 | Sep 2012 | KR
2015-0123254 | Nov 2015 | KR |
2016-0121552 | Oct 2016 | KR |
10-1790147 | Oct 2017 | KR |
WO 2008109248 | Sep 2008 | WO |
WO 2009042313 | Apr 2009 | WO |
WO 2010104879 | Sep 2010 | WO |
WO 2011011750 | Jan 2011 | WO |
WO 2011070554 | Jun 2011 | WO
WO 2014130871 | Aug 2014 | WO |
WO 2014186370 | Nov 2014 | WO |
WO 2014194257 | Dec 2014 | WO |
WO 2014197443 | Dec 2014 | WO |
WO 2015027089 | Feb 2015 | WO |
WO 2015073713 | May 2015 | WO |
WO 2015081113 | Jun 2015 | WO |
WO 2015123445 | Aug 2015 | WO |
WO 2015123775 | Aug 2015 | WO |
WO 2015199747 | Dec 2015 | WO |
WO 2016041088 | Mar 2016 | WO |
WO 2017062544 | Apr 2017 | WO |
WO 2017092225 | Jun 2017 | WO |
WO 2017120669 | Jul 2017 | WO |
WO 2017172185 | Oct 2017 | WO |
Entry |
---|
Brownlee, “Finite State Machines (FSM): Finite state machines as a control technique in Artificial Intelligence (AI),” Jun. 2002, 12 pages. |
Communication pursuant to Rule 164(1) EPC, dated Sep. 30, 2016, for corresponding EP Application No. 14753949.8, 7 pages. |
Costanza et al., “EMG as a Subtle Input Interface for Mobile Computing,” Mobile HCI 2004, LNCS 3160, edited by S. Brewster and M. Dunlop, Springer-Verlag Berlin Heidelberg, pp. 426-430, 2004. |
Costanza et al., “Toward Subtle Intimate Interfaces for Mobile Devices Using an EMG Controller,” CHI 2005, Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pp. 481-489, 2005. |
Ghasemzadeh et al., “A Body Sensor Network With Electromyogram and Inertial Sensors: Multimodal Interpretation of Muscular Activities,” IEEE Transactions on Information Technology in Biomedicine, vol. 14, No. 2, pp. 198-206, Mar. 2010. |
Gourmelon et al., “Contactless sensors for Surface Electromyography,” Proceedings of the 28th IEEE EMBS Annual International Conference, New York City, NY, Aug. 30-Sep. 3, 2006, pp. 2514-2517. |
International Search Report and Written Opinion, dated Aug. 21, 2014, for International Application No. PCT/US2014/037863, 12 pages. |
International Search Report and Written Opinion, dated Feb. 27, 2015, for International Application No. PCT/US2014/067443, 13 pages. |
International Search Report and Written Opinion, dated May 16, 2014, for International Application No. PCT/US2014/017799, 11 pages. |
International Search Report and Written Opinion, dated May 27, 2015, for International Application No. PCT/US2015/015675, 9 pages. |
International Search Report and Written Opinion, dated Nov. 21, 2014, for International Application No. PCT/US2014/052143, 11 pages. |
Janssen, “Radio Frequency (RF)” 2013, retrieved from https://web.archive.org/web/20130726153946/https://www.techopedia.com/definition/5083/radio-frequency-rf, retrieved on Jul. 12, 2017, 2 pages. |
Merriam-Webster, “Radio Frequencies” retrieved from https://www.merriam-webster.com/table/collegiate/radiofre.htm, retrieved on Jul. 12, 2017, 2 pages. |
Morris et al., “Emerging Input Technologies for Always-Available Mobile Interaction,” Foundations and Trends in Human-Computer Interaction 4(4):245-316, 2011. (74 total pages). |
Naik et al., “Real-Time Hand Gesture Identification for Human Computer Interaction Based on ICA of Surface Electromyogram,” IADIS International Conference Interfaces and Human Computer Interaction 2007, 8 pages. |
Picard et al., “Affective Wearables,” Proceedings of the IEEE 1st International Symposium on Wearable Computers, ISWC, Cambridge, MA, USA, Oct. 13-14, 1997, pp. 90-97. |
Rekimoto, “GestureWrist and GesturePad: Unobtrusive Wearable Interaction Devices,” ISWC '01 Proceedings of the 5th IEEE International Symposium on Wearable Computers, 2001, 7 pages. |
Saponas et al., “Making Muscle-Computer Interfaces More Practical,” CHI 2010, Atlanta, Georgia, USA, Apr. 10-15, 2010, 4 pages. |
Sato et al., “Touché: Enhancing Touch Interaction on Humans, Screens, Liquids, and Everyday Objects,” CHI '12, May 5-10, 2012, Austin, Texas. |
Ueno et al., “A Capacitive Sensor System for Measuring Laplacian Electromyogram through Cloth: A Pilot Study,” Proceedings of the 29th Annual International Conference of the IEEE EMBS, Cité Internationale, Lyon, France, Aug. 23-26, 2007, pp. 5731-5734. |
Ueno et al., “Feasibility of Capacitive Sensing of Surface Electromyographic Potential through Cloth,” Sensors and Materials 24(6):335-346, 2012. |
Xiong et al., “A Novel HCI based on EMG and IMU,” Proceedings of the 2011 IEEE International Conference on Robotics and Biomimetics, Phuket, Thailand, Dec. 7-11, 2011, 5 pages. |
Xu et al., “Hand Gesture Recognition and Virtual Game Control Based on 3D Accelerometer and EMG Sensors,” Proceedings of the 14th international conference on Intelligent user interfaces, Sanibel Island, Florida, Feb. 8-11, 2009, pp. 401-406. |
Zhang et al., “A Framework for Hand Gesture Recognition Based on Accelerometer and EMG Sensors,” IEEE Transactions on Systems, Man, and Cybernetics—Part A: Systems and Humans, vol. 41, No. 6, pp. 1064-1076, Nov. 2011. |
International Search Report and Written Opinion for International Application No. PCT/US2014/057029 dated Feb. 24, 2015. |
International Search Report and Written Opinion for International Application No. PCT/US2016/018293 dated Jun. 8, 2016. |
International Search Report and Written Opinion for International Application No. PCT/US2016/018298 dated Jun. 8, 2016. |
International Search Report and Written Opinion for International Application No. PCT/US2016/018299 dated Jun. 8, 2016. |
International Search Report and Written Opinion for International Application No. PCT/US2016/067246 dated Apr. 25, 2017. |
International Preliminary Report on Patentability for International Application No. PCT/US2017/043686 dated Feb. 7, 2019. |
International Search Report and Written Opinion for International Application No. PCT/US2017/043686 dated Oct. 6, 2017. |
International Preliminary Report on Patentability for International Application No. PCT/US2017/043693 dated Feb. 7, 2019. |
International Search Report and Written Opinion for International Application No. PCT/US2017/043693 dated Oct. 6, 2017. |
International Preliminary Report on Patentability for International Application No. PCT/US2017/043791 dated Feb. 7, 2019. |
International Search Report and Written Opinion for International Application No. PCT/US2017/043791 dated Oct. 5, 2017. |
International Preliminary Report on Patentability for International Application No. PCT/US2017/043792 dated Feb. 7, 2019. |
International Search Report and Written Opinion for International Application No. PCT/US2017/043792 dated Oct. 5, 2017. |
International Search Report and Written Opinion for International Application No. PCT/US2018/056768 dated Jan. 15, 2019. |
International Search Report and Written Opinion for International Application No. PCT/US2018/061409 dated Mar. 12, 2019. |
International Search Report and Written Opinion for International Application No. PCT/US2018/063215 dated Mar. 21, 2019. |
International Search Report and Written Opinion for International Application No. PCT/US2019/015134 dated May 15, 2019. |
International Search Report and Written Opinion for International Application No. PCT/US2019/015167 dated May 21, 2019. |
International Search Report and Written Opinion for International Application No. PCT/US2019/015174 dated May 21, 2019. |
International Search Report and Written Opinion for International Application No. PCT/US2019/015238 dated May 16, 2019. |
International Search Report and Written Opinion for International Application No. PCT/US2019/015183 dated May 3, 2019. |
International Search Report and Written Opinion for International Application No. PCT/US2019/015180 dated May 28, 2019. |
International Search Report and Written Opinion for International Application No. PCT/US2019/015244 dated May 16, 2019. |
International Search Report and Written Opinion for International Application No. PCT/US2019/037302 dated Oct. 11, 2019. |
International Search Report and Written Opinion for International Application No. PCT/US2019/028299 dated Aug. 9, 2019. |
International Search Report and Written Opinion for International Application No. PCT/US2019/034173 dated Sep. 18, 2019. |
Invitation to Pay Additional Fees for International Application No. PCT/US2019/031114 dated Aug. 6, 2019. |
Invitation to Pay Additional Fees for International Application No. PCT/US2019/049094 dated Oct. 24, 2019. |
International Search Report and Written Opinion for International Application No. PCT/US19/20065 dated May 16, 2019. |
[No Author Listed], IEEE 100 The Authoritative Dictionary of IEEE Standards Terms, Dec. 2000, The Institute of Electrical and Electronics Engineers, Inc. 3 Park Avenue, New York, NY, 10016-5997, USA, p. 1004. 3 pages. |
Amitai, P-27: A Two-Dimensional Aperture Expander for Ultra-Compact, High-Performance Head-Worn Displays. SID Symposium Digest of Technical Papers. 2005;36(1):360-363. |
Arkenbout et al., Robust Hand Motion Tracking through Data Fusion of 5DT Data Glove and Nimble VR Kinect Camera Measurements. Sensors. 2015;15:31644-71. |
Ayras et al., Exit pupil expander with a large field of view based on diffractive optics. Journal of the SID. 2009;17(8):659-664. |
Benko et al., Enhancing Input On and Above the Interactive Surface with Muscle Sensing. The ACM International Conference on Interactive Tabletops and Surfaces. ITS '09. 2009:93-100. |
Boyali et al., Spectral Collaborative Representation based Classification for hand gestures recognition on electromyography signals. Biomedical Signal Processing and Control. 2016;24:11-18. |
Brownlee, Finite State Machines as a Control Technique in Artificial Intelligence. Jun. 2002. 12 pages. |
Chellappan et al., Laser-based displays: a review. Applied Optics. 2010;49(25):F79-F98. |
Cheng et al., A Novel Phonology- and Radical-Coded Chinese Sign Language Recognition Framework Using Accelerometer and Surface Electromyography Sensors. Sensors. 2015;15:23303-24. |
Csapo et al., Evaluation of Human-Myo Gesture Control Capabilities in Continuous Search and Select Operations. 7th IEEE International Conference on Cognitive Infocommunications. 2016;000415-20. |
Cui et al., Diffraction from angular multiplexing slanted volume hologram gratings. Optik 2005;116:118-122. |
Curatu et al., Dual Purpose Lens for an Eye-Tracked Projection Head-Mounted Display. International Optical Design Conference 2006, SPIE-OSA 2006;6342:63420X-1-63420X-7. 7 pages. |
Curatu et al., Projection-based head-mounted display with eye-tracking capabilities. Proceedings of SPIE. 2005;5875:58750J-1-58750J-9. 9 pages. |
Davoodi et al., Development of a Physics-Based Target Shooting Game to Train Amputee Users of Multijoint Upper Limb Prostheses. Presence. Massachusetts Institute of Technology. 2012;21(1):85-95. |
Delis et al., Development of a Myoelectric Controller Based on Knee Angle Estimation. Biodevices 2009. International Conference on Biomedical Electronics and Devices. Jan. 17, 2009. 7 pages. |
Diener et al., Direct conversion from facial myoelectric signals to speech using Deep Neural Networks. 2015 International Joint Conference on Neural Networks (IJCNN). Oct. 1, 2015. 7 pages. |
Ding et al., HMM with improved feature extraction-based feature parameters for identity recognition of gesture command operators by using a sensed Kinect-data stream. Neurocomputing. 2017;262:108-19. |
Essex, Tutorial on Optomechanical Beam Steering Mechanisms. OPTI 521 Tutorial, College of Optical Sciences, University of Arizona, 8 pages, 2006. |
Farina et al., Man/machine interface based on the discharge timings of spinal motor neurons after targeted muscle reinnervation. Nature. Biomedical Engineering. 2017;1:1-12. |
Favorskaya et al., Localization and Recognition of Dynamic Hand Gestures Based on Hierarchy of Manifold Classifiers. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences. 2015;XL-5/W6:1-8. |
Fernandez et al., Optimization of a thick polyvinyl alcohol-acrylamide photopolymer for data storage using a combination of angular and peristrophic holographic multiplexing. Applied Optics 2009;45(29):7661-7666. |
Gallina et al., Surface EMG Biofeedback. Surface Electromyography: Physiology, Engineering, and Applications. 2016:485-500. |
Gopura et al., A Human Forearm and wrist motion assist exoskeleton robot with EMG-based fuzzy-neuro control. Proceedings of the 2nd IEEE/RAS-EMBS International Conference on Biomedical Robotics and Biomechatronics. Oct. 19-22, 2008. 6 pages. |
Hainich et al., Chapter 10: Near-Eye Displays. Displays: Fundamentals & Applications, AK Peters/CRC Press. 2011. 65 pages. |
Hauschild et al., A Virtual Reality Environment for Designing and Fitting Neural Prosthetic Limbs. IEEE Transactions on Neural Systems and Rehabilitation Engineering. 2007;15(1):9-15. |
Hornstein et al., Maradin's Micro-Mirror—System Level Synchronization Notes. SID 2012 Digest, pp. 981-984. |
Itoh et al., Interaction-Free Calibration for Optical See-Through Head-Mounted Displays based on 3D Eye Localization. 2014 IEEE Symposium on 3D User Interfaces (3DUI), pp. 75-82, 2014. |
Janssen, What is Radio Frequency (RF)? Techopedia, 2013. 2 pages. URL:https://web.archive.org/web/20130726153946/https://www.techopedia.com/definition/5083/radio-frequency-rf [last accessed on Jul. 12, 2017]. |
Jiang, Purdue University Graduate School Thesis/Dissertation Acceptance. Graduate School Form 30. Updated Jan. 15, 2015. 24 pages. |
Kawaguchi et al., Estimation of Finger Joint Angles Based on Electromechanical Sensing of Wrist Shape. IEEE Transactions on Neural Systems and Rehabilitation Engineering. 2017;25(9):1409-18. |
Kessler, Optics of Near to Eye Displays (NEDs). Presentation—Oasis 2013, Tel Aviv, Feb. 19, 2013, 37 pages. |
Kim et al., Real-Time Human Pose Estimation and Gesture Recognition from Depth Images Using Superpixels and SVM Classifier. Sensors. 2015;15:12410-27. |
Koerner, Design and Characterization of the Exo-Skin Haptic Device: A Novel Tendon Actuated Textile Hand Exoskeleton. 2017. 5 pages. |
Kress et al., A review of head-mounted displays (HMD) technologies and applications for consumer electronics. Proc. of SPIE. 2013;8720:87200A-1-87200A-13. 13 pages. |
Kress et al., Diffractive and Holographic Optics as Optical Combiners in Head Mounted Displays. Proceedings of the 2013 ACM Conference on Pervasive and Ubiquitous Computing Adjunct Publication, pp. 1479-1482, 2013. |
Kress, Optical architectures for see-through wearable displays. Presentation—Bay Area—SID Seminar, Apr. 30, 2014, 156 pages. |
Lee et al., Motion and Force Estimation System of Human Fingers. Journal of Institute of Control, Robotics and Systems. 2011;17(10):1014-1020. |
Levola, 7.1: Invited Paper: Novel Diffractive Optical Components for Near to Eye Displays. SID Symposium Digest of Technical Papers 2006;37(1):64-67. |
Li et al., Motor Function Evaluation of Hemiplegic Upper-Extremities Using Data Fusion from Wearable Inertial and Surface EMG Sensors. Sensors. MDPI. 2017;17(582):1-17. |
Liao et al., The Evolution of MEMS Displays. IEEE Transactions on Industrial Electronics 2009;56(4):1057-1065. |
Lippert, Chapter 6: Display Devices: RSD™ (Retinal Scanning Display). The Avionics Handbook, CRC Press, 2001, 8 pages. |
Lopes et al., Hand/arm gesture segmentation by motion using IMU and EMG sensing. ScienceDirect. Elsevier. Procedia Manufacturing. 2017;11:107-13. |
Majaranta et al., Chapter 3 Eye-Tracking and Eye-Based Human-Computer Interaction. Advances in Physiological Computing, Springer-Verlag London, 2014, pp. 39-65. |
Martin et al., A Novel Approach of Prosthetic Arm Control using Computer Vision, Biosignals, and Motion Capture. IEEE. 2014. 5 pages. |
Mcintee, A Task Model of Free-Space Movement-Based Gestures. Dissertation. Graduate Faculty of North Carolina State University. Computer Science. 2016. 129 pages. |
Mendes et al., Sensor Fusion and Smart Sensor in Sports and Biomedical Applications. Sensors. 2016;16(1569):1-31. |
Mohamed, Homogeneous cognitive based biometrics for static authentication. Dissertation submitted to University of Victoria, Canada. 2010. 149 pages. URL:http://hdl.handle.net/1828/3211 [last accessed Oct. 11, 2019]. |
Naik et al., Source Separation and Identification issues in bio signals: A solution using Blind source separation. Intech. 2009. 23 pages. |
Naik et al., Subtle Hand gesture identification for HCI using Temporal Decorrelation Source Separation BSS of surface EMG. Digital Image Computing Techniques and Applications. IEEE Computer Society. 2007;30-7. |
Negro et al., Multi-channel intramuscular and surface EMG decomposition by convolutive blind source separation. Journal of Neural Engineering. 2016;13:1-17. |
Saponas et al., Demonstrating the Feasibility of Using Forearm Electromyography for Muscle-Computer Interfaces. CHI 2008 Proceedings. Physiological Sensing for Input. 2008:515-24. |
Saponas et al., Enabling Always-Available Input with Muscle-Computer Interfaces. UIST '09. 2009:167-76. |
Saponas et al., Making Muscle-Computer Interfaces More Practical. CHI 2010: Brauns and Brawn. 2010:851-4. |
Sartori et al., Neural Data-Driven Musculoskeletal Modeling for Personalized Neurorehabilitation Technologies. IEEE Transactions on Biomedical Engineering. 2016;63(5):879-93. |
Sauras-Perez et al., A Voice and Pointing Gesture Interaction System for Supporting Human Spontaneous Decisions in Autonomous Cars. Clemson University. All Dissertations. 2017. 174 pages. |
Schowengerdt et al., Stereoscopic retinal scanning laser display with integrated focus cues for ocular accommodation. Proc. of SPIE-IS&T Electronic Imaging 2004;5291:366-376. |
Shen et al., I am a Smartwatch and I can Track my User's Arm. University of Illinois at Urbana-Champaign. MobiSys' 16. 12 pages. |
Silverman et al., 58.5L: Late-News Paper: Engineering a Retinal Scanning Laser Display with Integrated Accommodative Depth Cues. SID 03 Digest, pp. 1538-1541, 2003. |
Son et al., Evaluating the utility of two gestural discomfort evaluation methods. PLOS One. 2017. 21 pages. |
Strbac et al., Microsoft Kinect-Based Artificial Perception System for Control of Functional Electrical Stimulation Assisted Grasping. Hindawi Publishing Corporation. BioMed Research International. 2014. 13 pages. |
Takatsuka et al., Retinal projection display using diffractive optical element. Tenth International Conference on Intelligent Information Hiding and Multimedia Signal Processing, IEEE, 2014, pp. 403-406. |
Torres, Myo Gesture Control Armband. PCMag. URL:https://www.pcmag.com/article2/0,2817,2485462,00.asp 2015. 9 pages. |
Urey et al., Optical performance requirements for MEMS-scanner based microdisplays. Conf. on MOEMS and Miniaturized Systems. SPIE. 2000;4178:176-185. |
Urey, Diffractive exit-pupil expander for display applications. Applied Optics 2001;40(32):5840-5851. |
Valero-Cuevas et al., Computational Models for Neuromuscular Function. NIH Public Access Author Manuscript. Jun. 16, 2011. 52 pages. |
Viirre et al., The Virtual Retinal Display: A New Technology for Virtual Reality and Augmented Vision in Medicine. Proceedings of Medicine Meets Virtual Reality, IOS Press and Ohmsha, 1998, pp. 252-257. 6 pages. |
Wodzinski et al., Sequential Classification of Palm Gestures Based on A* Algorithm and MLP Neural Network for Quadrocopter Control. Metrol. Meas. Syst., 2017;24(2):265-76. |
Xue et al., Multiple Sensors Based Hand Motion Recognition Using Adaptive Directed Acyclic Graph. Applied Sciences. MDPI. 2017;7(358):1-14. |
Yang et al., Surface EMG based handgrip force predictions using gene expression programming. Neurocomputing. 2016;207:568-579. |
Number | Date | Country | |
---|---|---|---|
20180120948 A1 | May 2018 | US |
Number | Date | Country | |
---|---|---|---|
62014605 | Jun 2014 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 14737081 | Jun 2015 | US |
Child | 15852196 | US |