The application relates to user interactions in networks, in particular, to systems and methods for reproducing human motions via networks.
Conventionally, users communicate with each other over a network by exchanging multimedia information such as text, images, and voice. However, body motions of a user at one end of the network cannot be transmitted to a user at the other end by existing communication platforms.
In a first aspect of the present application, a system for reproducing a body motion via a network is disclosed. The system comprises: a sensor configured to capture a surface electromyography signal generated by a body motion of a user at a first terminal in the network; a processor configured to receive the signal from the sensor and identify the body motion based on the received signal, and send information associated with the body motion to a second terminal in the network, the processor being located in the first terminal; and a mechanical member configured to receive the information associated with the body motion from the second terminal and reproduce the body motion based on the received information.
In a second aspect of the present application, a method for reproducing a body motion via a network is disclosed. The method comprises: capturing a surface electromyography signal generated by a body motion of a user at a first terminal in a network; identifying the body motion based on the signal at the first terminal; sending information associated with the body motion to a second terminal in the network; and reproducing the body motion at the second terminal based on the information.
In a third aspect of the present application, a system for identifying a body motion is disclosed. The system comprises: a sensor configured to capture a surface electromyography signal generated by a body motion; and an identifying module comprising a feature extracting unit and a motion classifying unit. The feature extracting unit is configured to extract a feature signal from the signal received from the sensor. The motion classifying unit is configured to identify a body motion corresponding to the extracted feature signal from a plurality of pre-stored motion samples.
In a fourth aspect of the present application, a device for identifying a body motion is disclosed. The device comprises: a feature extracting unit configured to extract a feature signal from a surface electromyography signal generated by a body motion; and a motion classifying unit configured to identify a body motion corresponding to the extracted feature signal from a plurality of pre-stored motion samples.
In a fifth aspect of the present application, a method for identifying a body motion is disclosed. The method comprises: extracting a feature signal from a surface electromyography signal generated by a body motion; and identifying a body motion corresponding to the extracted feature signal from a plurality of pre-stored motion samples.
The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawings will be provided by the Office upon request and payment of the necessary fee.
FIGS. 4a-4c show spectral moment features of six-channel signals for three different kinds of hand movement according to an embodiment of the present application.
FIGS. 6a-6b show sEMG signals obtained with small and large forces, respectively, according to an embodiment of the present application.
FIGS. 8a-8b show STFT results at low and high speeds, respectively, according to an embodiment of the present application.
Hereinafter, embodiments according to the present application are described in detail with reference to the accompanying drawings for purposes of illustration.
Referring to the accompanying drawings, a system 100 for reproducing a body motion via a network according to an embodiment of the present application is illustrated.
As shown, the terminals 11 and 12 are operated by the users 10 and 20, respectively, and communicate with each other via the network. Each terminal may be any suitable communication terminal in the network, such as a PC, a laptop computer, a cell phone, or a PDA.
The sEMG sensor 12 may be worn by the user 10 on his/her skin surface. For example, the sEMG sensor 12 may be worn around the user's wrist. Once a body portion of the user corresponding to the sEMG sensor 12 moves, his/her muscles at the corresponding place contract accordingly so that a weak electrical signal is generated there. In response to the user's body movement, such as a hand movement, an electrical signal, i.e., an electromyography signal, caused by the movement of the muscle covered by the sEMG sensor 12 worn on the skin surface is detected by the sEMG sensor 12. The detected electromyography signal may be transmitted from the sEMG sensor 12 to the terminal 11 via a wired or wireless interface.
According to an embodiment, a wireless module is used to transmit the electromyography signal. The wireless module may comprise a transmitter arranged at the sEMG sensor 12 and a receiver coupled to the terminal 11. A converter may be provided in the transmitter for converting the received analog sEMG signal into a digital signal so that the digital signal is transmitted to the receiver wirelessly. For example, the transmitter may be implemented by a CC2500 2.4 GHz wireless module and the converter may be implemented by an MSP430 converter.
In an embodiment according to the present application, the sEMG sensor may be a wrist sEMG sensing ring worn around a wrist of the user. In this case, as shown in the accompanying drawings, the sensing ring may comprise a plurality of sensing channels, for example, six channels, arranged around the wrist.
Since the detected sEMG signal has low amplitudes ranging from fractions of a μV to several hundred μV, an amplifying unit 202 such as a differential amplifier is provided in the sEMG sensor to amplify the signal to the scale of several volts, as shown in the accompanying drawings.
In an embodiment according to the present application, the processor in the terminal 11 may comprise an identifying module configured to identify the body motion based on the signal received from the sEMG sensor. An illustrative block diagram of the identifying module 30 is shown in the accompanying drawings.
As shown, the identifying module 30 may comprise a feature extracting unit 301 configured to extract a feature signal from the signal received from the sEMG sensor; and a motion classifying unit 302 configured to identify a body motion corresponding to the extracted feature signal from a plurality of pre-stored motion samples. Optionally, the identifying module 30 may further comprise a dimensionality reducing unit 303 configured to reduce a dimensionality of the extracted feature signal. In this case, the motion classifying unit 302 is configured to identify the body motion based on the signal with a reduced dimensionality. Optionally, the identifying module 30 may further comprise a training unit 304 configured to learn different body motions in advance so as to obtain the plurality of pre-stored motion samples.
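By way of a non-limiting illustration, the flow through the identifying module 30 may be sketched as follows; the function names, the use of Python with NumPy, and the shape of the pre-stored motion samples are assumptions made for illustration only and are not part of the disclosed embodiments.

```python
# Illustrative sketch of the identifying module 30 (assumed names and shapes).
import numpy as np

def identify_motion(semg_window, motion_samples, extract, reduce_dim=None):
    """Map a window of multi-channel sEMG data to one of the pre-stored motions.

    semg_window    : array of shape (channels, samples) from the sEMG sensor
    motion_samples : list of (motion_label, reference_feature) pairs obtained
                     in advance by the training unit 304
    extract        : feature extracting unit 301 (callable)
    reduce_dim     : optional dimensionality reducing unit 303 (callable)
    """
    feature = extract(semg_window)              # feature extracting unit 301
    if reduce_dim is not None:                  # optional unit 303
        feature = reduce_dim(feature)
    # Motion classifying unit 302: pick the pre-stored sample whose reference
    # feature is closest to the extracted feature.
    label, _ = min(motion_samples,
                   key=lambda s: np.linalg.norm(feature - s[1]))
    return label
```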
In an embodiment, a storing unit (not shown) may be provided in the identifying module 30 to store the plurality of pre-stored motion samples. Alternatively, the storing unit may be provided outside the identifying module 30. Alternatively, the plurality of pre-stored motion samples may be stored in other storage in the terminal 11 or stored in a distributed network.
After the body motion is identified by the identifying module, information associated with the identified body motion is sent to the second terminal via the network. Then, the information is transmitted to the mechanical member in a wired or wireless manner so that the mechanical member reproduces the body motion for a user at the second terminal.
Hereinafter, the processing performed by components of the identifying module according to the present application will be described in detail with reference to illustrative embodiments.
To identify the body movements, in particular, the hand movements, features distinguishing different types of movements are extracted from the sEMG signal by the feature extracting unit 301. The extraction may be implemented by a temporal method, a spectral method, or a temporal-spectral method. In the temporal method, a square integral feature may be obtained. In the spectral method, a moment and/or a square integral feature may be obtained based on a Fourier transform. In the temporal-spectral method, a moment and/or a square integral feature may be obtained based on the short-time Fourier transform (STFT).
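As a non-limiting sketch of these three feature types for a single sEMG channel, assuming a sampling rate, window length and the NumPy/SciPy dependencies chosen for illustration only:

```python
import numpy as np
from scipy.signal import stft

def square_integral(x):
    """Temporal feature: sum of squared amplitudes of one sEMG channel."""
    return float(np.sum(np.asarray(x, dtype=float) ** 2))

def spectral_moment(x, fs=1000.0, order=1):
    """Spectral feature: order-th moment of the FFT power spectrum."""
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return float(np.sum((freqs ** order) * spectrum))

def stft_square_integral(x, fs=1000.0, nperseg=128):
    """Temporal-spectral feature: per-segment energy computed from the STFT."""
    _, _, Z = stft(x, fs=fs, nperseg=nperseg)
    return np.sum(np.abs(Z) ** 2, axis=0)   # one energy value per time segment
```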
FIGS. 4a-4c illustratively show spectral moment features extracted from the six-channel signals obtained from the sEMG sensor in response to three types of hand movement, respectively. As shown, different hand movements yield different spectral moment features, which may be used directly for the subsequent motion classification. However, the values of the spectral moment features vary with the amplitude of the force exerted by the user, whereas the ratios of the spectral moment features across the different channels remain stable for the same movement regardless of the exerted force. Thus, relative differences among the six-channel signals, for example, the ratios of the spectral moment features obtained from the six channels, may be used in the subsequent classification to distinguish different hand movements.
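A minimal sketch of the ratio normalization described above, assuming the six per-channel spectral moments have already been computed; the example values are illustrative, not measured data:

```python
import numpy as np

def channel_ratios(moments):
    """Normalize six per-channel spectral moments by their sum so that the
    resulting ratios stay roughly stable when the exerted force changes."""
    moments = np.asarray(moments, dtype=float)
    return moments / moments.sum()

# Illustrative numbers only: the same movement performed with a small and a
# large force gives moments that differ in scale but have similar ratios.
small_force = channel_ratios([1.0, 2.0, 0.5, 3.0, 1.5, 2.5])
large_force = channel_ratios([2.1, 4.0, 1.0, 6.2, 3.0, 5.1])
```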
According to the extracted features, the motion classifying unit may identify the corresponding body motion from a plurality of pre-stored motion samples based on the extracted features and then information associated with the body motion may be sent to the second terminal to instruct the mechanical member to reproduce the corresponding body motion. The plurality of pre-stored motion samples may be stored in a storage unit inside or outside the identifying module. Alternatively, the plurality of pre-stored motion samples may be stored in a distributed network. Each of the plurality of pre-stored motion samples may comprise information associated with a body motion and its corresponding features. Each body motion may be indicated by the information associated therewith.
As stated above, in embodiments with the six-channel sEMG sensor, each body movement is characterized by six features extracted from the six sEMG signals. In an embodiment, a principal component analysis (PCA) may be employed to reduce the dimensionality of the features so as to reduce the amount of computation to be performed. Using the PCA, the features extracted from the six-channel sEMG signals are projected onto a 2-dimensional plane, as shown in the accompanying drawings.
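A minimal sketch of the PCA projection, assuming a feature matrix with one row per recorded movement and six columns (one per channel) and using scikit-learn for illustration:

```python
import numpy as np
from sklearn.decomposition import PCA

# features: shape (num_movements, 6), one six-channel feature vector per movement.
features = np.random.rand(30, 6)            # placeholder data for illustration

pca = PCA(n_components=2)
projected = pca.fit_transform(features)     # shape (num_movements, 2)

# The same fitted PCA is reused later to project the features of a new,
# unknown movement onto the same 2-dimensional plane before classification.
new_feature = np.random.rand(1, 6)
new_projected = pca.transform(new_feature)
```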
In this case, each of the plurality of pre-stored motion samples may comprise information associated with a body motion and its corresponding features with a reduced dimensionality. The motion classifying unit may identify the corresponding body motion based on the extracted features with the reduced dimensionality.
According to an embodiment, a training unit may be provided. In this case, the pre-stored samples may be obtained by the training unit learning different body motions. For example, a statistical training method based on the Mahalanobis distance may be used, in which the 2-dimensional projected features obtained by the PCA are clustered using the Mahalanobis distance. As shown in the accompanying drawings, the projected features of different hand movements form separate clusters.
Thus, to recognize a new hand movement, represented by # in the corresponding figure, the Mahalanobis distance between the projected feature of the new movement and each of the trained clusters is calculated, and the cluster having the smallest distance indicates the identified movement.
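A minimal sketch of this Mahalanobis-distance classification, assuming that each pre-stored motion contributes a set of 2-dimensional projected features collected by the training unit:

```python
import numpy as np

def fit_clusters(training_points):
    """training_points: dict mapping motion label -> array of shape (n, 2) of
    projected features collected during training.  Returns per-cluster stats."""
    clusters = {}
    for label, pts in training_points.items():
        mean = pts.mean(axis=0)
        inv_cov = np.linalg.inv(np.cov(pts, rowvar=False))
        clusters[label] = (mean, inv_cov)
    return clusters

def classify(point, clusters):
    """Return the motion whose cluster has the smallest Mahalanobis distance
    to the new 2-dimensional projected feature `point`."""
    def mahalanobis(mean, inv_cov):
        d = point - mean
        return float(np.sqrt(d @ inv_cov @ d))
    return min(clusters, key=lambda label: mahalanobis(*clusters[label]))
```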
Besides the type of a hand movement, the exerted force and the speed of the movement are also important: they affect the obtained signal and thus the extracted features. According to an embodiment, the amplitude of the exerted force and/or the speed of the movement may be taken into account in the training and identification.
The temporal, spectral and temporal-spectral methods may also be used to distinguish different force levels. For example, FIGS. 6a and 6b show sEMG signals obtained with a small force and a large force, respectively.
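As a non-limiting sketch, a force level may be estimated from the temporal square-integral (energy) of a channel; the threshold value is an illustrative assumption that would in practice be calibrated during training:

```python
import numpy as np

def estimate_force_level(x, threshold=1.0):
    """Classify the exerted force as 'small' or 'large' from the square
    integral (signal energy) of one sEMG channel.  The threshold is a
    hypothetical value calibrated per user during training."""
    energy = float(np.sum(np.asarray(x, dtype=float) ** 2))
    return "large" if energy > threshold else "small"
```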
Similarly, the speed information is hidden in transient features, which can be extracted by an STFT method. The basic idea of the STFT method is to divide the signal into short segments in the time domain and then apply the Fourier transform to each segment. Compared with the traditional FFT, the STFT method is able to capture more features of the transient movements and makes it possible to identify a velocity of the movement. Based on a result of the STFT, a flatness feature may be determined to describe the energy distribution in both the time and frequency domains and to distinguish different movement speeds. Accordingly, when samples are collected in the training process, the relationship between the obtained features and the movement speeds is considered. Then, when the identification is performed based on the extracted features, the corresponding speed can also be determined. An example is shown in FIGS. 8a and 8b, which illustrate STFT results at low and high speeds, respectively.
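A minimal sketch of a flatness feature derived from the STFT, assuming SciPy for the transform and the geometric-to-arithmetic-mean form of flatness; both are illustrative choices rather than details of the present application:

```python
import numpy as np
from scipy.signal import stft

def stft_flatness(x, fs=1000.0, nperseg=128):
    """Flatness of the STFT power: the ratio of the geometric mean to the
    arithmetic mean over all time-frequency bins, describing how evenly the
    energy is distributed in time and frequency."""
    _, _, Z = stft(x, fs=fs, nperseg=nperseg)
    power = np.abs(Z) ** 2 + 1e-12          # avoid log(0)
    geometric_mean = np.exp(np.mean(np.log(power)))
    arithmetic_mean = np.mean(power)
    return float(geometric_mean / arithmetic_mean)
```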
When the body motion is identified, information associated with the body motion is sent to the second terminal via the network. Then, the second terminal transmits the information to the mechanical member coupled thereto. The information may be transmitted from the second terminal to the mechanical member via wired or wireless communication. According to an embodiment, the information is transmitted via a wireless module similar to that used to communicate between the sEMG sensor and the first terminal as discussed above. That is, a transmitter is arranged to be coupled to the second terminal and a receiver is arranged to be coupled to the mechanical member.
According to an embodiment, the mechanical member is a robot hand with a palm, five fingers and a driving unit for the five fingers. Each finger is driven by a coupled mechanism and, similar to a finger of a human hand, can bend and stretch relative to the palm, as shown in the accompanying drawings.
According to an embodiment, the driving unit may comprise four servo motors, one of which drives a ring finger and a little finger, and the other three of which drive a thumb, an index finger and a middle finger, respectively. Based on the received information, the robot hand instructs the driving unit to rotate one or more of the five fingers.
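By way of a non-limiting illustration, the received motion information might be mapped to target angles for the four servo motors as follows; the motion names, the angle values and the set_servo_angle helper are hypothetical:

```python
# Hypothetical mapping from identified hand motions to target servo angles
# (degrees) for the four servo motors described above: servo 0 drives the
# ring and little fingers; servos 1-3 drive the thumb, index and middle
# fingers, respectively.  All names and values are assumptions.
MOTION_TO_SERVO_ANGLES = {
    "open_hand": [0, 0, 0, 0],
    "fist":      [90, 90, 90, 90],
    "point":     [90, 90, 0, 90],   # index finger extended, others bent
}

def reproduce_motion(motion_name, set_servo_angle):
    """Drive each servo to the angle associated with the identified motion.
    set_servo_angle(index, angle) is a hypothetical hardware interface."""
    for index, angle in enumerate(MOTION_TO_SERVO_ANGLES[motion_name]):
        set_servo_angle(index, angle)
```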
According to an embodiment, the tension part 260 comprises two wires connected to one servo motor. As shown in the accompanying drawings, the two wires are pulled and released by the servo motor to bend and stretch the corresponding finger.
The mechanical member may further comprise an arm, in which the servo motors may be placed. At the bottom of the arm, a battery case may be placed for holding the batteries as well as the circuit block for receiving signals. In addition, the mechanical member may be coated to provide a better sense of touch.
Referring to the accompanying drawings, a method for reproducing a body motion via a network according to an embodiment of the present application comprises: capturing a surface electromyography signal generated by a body motion of a user at a first terminal in the network; identifying the body motion based on the captured signal (step S1202); sending information associated with the identified body motion to a second terminal in the network; and reproducing the body motion at the second terminal based on the information.
According to an embodiment, the identifying step S1202 may comprise sub-steps of extracting a feature signal from the captured signal, and identifying a body motion corresponding to the extracted feature signal from a plurality of pre-stored motion samples. The plurality of pre-stored motion samples may be obtained by learning different body motions in advance.
According to an embodiment, the method may further comprise a step of reducing a dimensionality of the extracted feature signal before the step of identifying, wherein the extracted feature signal may be a square integral or a moment obtained from the sEMG signal.
Since the steps have been discussed hereinabove with reference to the system 100, detailed descriptions thereof are omitted.
Hereinabove, illustrative embodiments according to the present application have been described with reference to the accompanying drawings. However, as is obvious to those skilled in the art, it is not necessary for one solution to contain all of the elements mentioned above; any suitable combination of the described elements may be used to implement the present application.
U.S. provisional patent application Ser. No. 61/235,986 filed Aug. 21, 2009, and Chinese patent application Serial No. 201010146127.6 filed Apr. 12, 2010 are each incorporated herein by reference in their entirety.
The present application claims the benefits of U.S. provisional application 61/235,986 filed on Aug. 21, 2009, which is incorporated herein by reference.