Apparatus, system and method for interpreting and reproducing physical motion

Abstract
An apparatus, system and method for turning physical motion into an interpretable language which when formed into sentences reproduces the original motion. This system may be referred to herein as a “Motion Description System.” Physical motion is defined as motion in one, two or three dimensions, with anywhere from 1 to 6 degrees of freedom. Language is defined as meaning applied to an abstraction.
Description
FIELD OF THE INVENTION

The present invention is directed to the field of analyzing motion and more specifically to an apparatus, system and method for interpreting and reproducing physical motion.


BACKGROUND OF THE INVENTION

Motion sensing devices and systems, including those utilized in virtual reality devices, are known in the art; see U.S. Pat. App. Pub. No. 2003/0024311 to Perkins; U.S. Pat. App. Pub. No. 2002/0123386 to Perlmutter; U.S. Pat. No. 5,819,206 to Horton et al.; U.S. Pat. No. 5,898,421 to Quinn; U.S. Pat. No. 5,694,340 to Kim; and U.S. Pat. No. RE37,374 to Roston et al., all of which are incorporated herein by reference.


Accordingly, there is a need for an apparatus, system and method that can facilitate the interpretation and reproduction of sensed physical motion.


SUMMARY OF THE INVENTION

An apparatus, system and method for turning physical motion into an interpretable language which when formed into sentences represents the original motion. This system may be referred to herein as a “Motion Description System.” Physical motion is defined as motion in one, two or three dimensions, with anywhere from 1 to 6 degrees of freedom. Language is defined as meaning applied to an abstraction.





BRIEF DESCRIPTION OF THE DRAWING

The invention is described with reference to the FIGURE of the drawing, in which:


The FIGURE is a schematic illustration of a system used to turn physical motion into an interpretable language, according to various embodiments of the invention.





DETAILED DESCRIPTION OF VARIOUS EMBODIMENTS OF THE INVENTION

It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention, as claimed. It may be noted that, as used in the specification and the appended claims, the singular forms “a”, “an” and “the” include plural referents unless the context clearly dictates otherwise. References cited herein are hereby incorporated by reference in their entirety, except to the extent that they conflict with teachings explicitly set forth in this specification.


Referring now to the FIGURE of the drawing, the FIGURE constitutes a part of this specification and illustrates exemplary embodiments of the invention. It is to be understood that in some instances various aspects of the invention may be shown schematically or may be exaggerated to facilitate an understanding of the invention.


The FIGURE is a schematic illustration of a system 1000 used to turn physical motion into an interpretable language, according to various embodiments of the present invention. When formed into sentences the interpretable language may be used to abstractly replace the original physical motion. Embodiments of system components are described below.


Motion Sensing


In one embodiment, a motion sensing unit 10 is described as follows. Physical motion is captured using a motion capture device 12 such as, but not limited to, one or more of the following: an accelerometer, a gyroscope, an RF tag, a magnetic sensor, a compass, a global positioning unit, a fiber-optic interferometer, a piezo sensor, a strain gauge, a camera, etc. Data is received from the motion capture device 12 and transferred to the motion interpretation unit 100, for example via a data reception and transmission device 14. As shown by the multiple embodiments illustrated, the motion data may then be transferred directly to the motion interpretation unit 100 or may be transferred via an external application 20, such as a program that utilizes the raw motion data as well as the commands received from the motion interpretation unit 100 (described below). Data transfer may be accomplished by direct electrical connection, by wireless data transmission or by other data transfer mechanisms known to those of ordinary skill in the art.
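By way of illustration only, the following Python sketch shows one way a motion sensing unit might poll a capture device and forward raw samples toward the motion interpretation unit. The class and method names (read, send) are hypothetical and are not prescribed by this specification.

```python
import time
from dataclasses import dataclass

@dataclass
class RawSample:
    timestamp: float
    values: tuple  # e.g., accelerometer axes; devices may supply up to 6 DOF

class MotionSensingUnit:
    """Hypothetical motion sensing unit 10: polls a capture device 12 and
    forwards raw samples via a data reception and transmission device 14."""
    def __init__(self, capture_device, transmitter):
        self.capture_device = capture_device  # e.g., an accelerometer driver
        self.transmitter = transmitter        # wired or wireless transport

    def poll_once(self):
        # Sample the device and forward the data toward interpretation unit 100.
        sample = RawSample(time.time(), self.capture_device.read())
        self.transmitter.send(sample)
        return sample
```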


Motion Interpretation


In one embodiment, a motion interpretation unit 100 contains the following components:


Data Processor 110


Raw motions are periodically sampled from the one or more physical motion capture devices 12 of the motion sensing unit 10.


Raw non-motion data is periodically sampled and input from a non-motion data device 112 (e.g., keyboard, voice, mouse, etc.).


A single sample of Complex Motion data is preliminarily processed. The Complex Motion data is defined as the combined sample of all raw physical motion captured by the motion capture device(s) and all non-motion data as defined above.
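By way of illustration only, a Complex Motion sample might be represented as a record combining all raw motion captured in one sampling period with any non-motion data, per the definition above. The Python sketch below uses illustrative field names that are not prescribed by this specification.

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

@dataclass
class ComplexMotionSample:
    """One combined sample: all raw physical motion captured by the
    device(s) plus all non-motion data (keyboard, voice, mouse, etc.)."""
    timestamp: float
    motion: Dict[str, Tuple[float, ...]]  # e.g., {"accelerometer": (ax, ay, az)}
    non_motion: Dict[str, str] = field(default_factory=dict)  # e.g., {"keyboard": "shift"}
```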


All the Single Degree Motion (SDM) components are identified from the Complex Motion data. The Single Degree Motion components are defined as the expression of a multi-dimensional motion in terms of single dimension vectors in a given reference frame.
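By way of illustration only, one way to derive Single Degree Motion components is to project a multi-dimensional motion vector onto each axis of the chosen reference frame, yielding one single-dimension component per axis. The Python sketch below assumes an orthonormal frame and uses NumPy; neither is mandated by this specification.

```python
import numpy as np

def to_sdm_components(motion_vector, reference_frame):
    """Project a multi-dimensional motion vector onto each axis of a
    reference frame, yielding one single-dimension (SDM) component per axis.
    reference_frame: rows are unit axis vectors (e.g., np.eye(3))."""
    frame = np.asarray(reference_frame, dtype=float)
    v = np.asarray(motion_vector, dtype=float)
    return frame @ v  # one scalar SDM component per axis

# Example: a 3D velocity expressed in the standard frame
print(to_sdm_components([1.0, -2.0, 0.5], np.eye(3)))  # -> [ 1.  -2.   0.5]
```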


Token Identifier (TI) or Tokenizer 120


The tokenizer 120 receives as input a stream of Single Degree Motion component samples.


Each time-sequential subset of samples is marked as a possible token.


The system maintains a token dictionary 122, defined as an editable list of simple meanings given to SDM components.


Sample groups marked for tokenization are compared against the token dictionary 122 and are either discarded (as bad syntax) or given token status.
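By way of illustration only, the Python sketch below shows one possible tokenization step: sample groups are quantized, compared against an editable token dictionary, and either given token status or discarded as bad syntax. The quantization rule and dictionary entries are invented for the example and are not prescribed by this specification.

```python
# Illustrative token dictionary 122: quantized SDM pattern -> token meaning.
# The dictionary is editable by the user.
token_dictionary = {
    (1, 0, 0): "MOVE_RIGHT",
    (0, 1, 0): "MOVE_UP",
    (0, 0, -1): "PULL_BACK",
}

def quantize(sdm_components, threshold=0.5):
    """Reduce raw SDM components to a coarse sign pattern for dictionary lookup."""
    return tuple(0 if abs(c) < threshold else (1 if c > 0 else -1)
                 for c in sdm_components)

def tokenize(sample_group):
    """Return the token for a sample group, or None to discard it as bad syntax."""
    return token_dictionary.get(quantize(sample_group))
```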


Parser 130


The parser 130 receives as input a stream of tokenized 3D Complex Motion/Non-Motion data.


Using a language specification 132, the tokens are grouped into sentences. In one embodiment, the system contains a default language specification.
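By way of illustration only, the Python sketch below groups a token stream into sentences using a toy language specification. The grammar shown is invented for the example and is not the default language specification referenced above.

```python
# Illustrative language specification 132: token sequences -> sentence types.
language_specification = {
    ("PULL_BACK", "MOVE_RIGHT"): "BACKSWING",
    ("MOVE_RIGHT", "MOVE_UP"): "FOLLOW_THROUGH",
}

def parse(token_stream, window=2):
    """Group a stream of tokens into sentences per the specification."""
    sentences = []
    buffer = []
    for token in token_stream:
        buffer.append(token)
        key = tuple(buffer[-window:])
        if key in language_specification:
            sentences.append(language_specification[key])
            buffer.clear()
    return sentences

print(parse(["PULL_BACK", "MOVE_RIGHT", "MOVE_UP"]))  # -> ['BACKSWING']
```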


Command Generator 140


The command generator 140 receives as input a sentence and outputs commands based on sentences and non-motion related inputs.


At any time a user may create or teach the system new language (e.g., tokens, sentences) by associating a raw motion with an output command. Output commands can include, but are not limited to, application context-specific actions, keystrokes, and mouse movements. In one embodiment, the output command is sent to the external application 20.
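By way of illustration only, the Python sketch below shows one way a command generator might map sentences, optionally combined with non-motion input, to output commands, and how a user might teach the system a new association. All names and mappings are hypothetical.

```python
# Illustrative command generator 140: (sentence, non-motion input) -> command.
command_map = {
    ("BACKSWING", None): "annotate:good takeaway",
    ("FOLLOW_THROUGH", "shift"): "keystroke:F",
}

def generate_command(sentence, non_motion=None):
    """Output a command for a sentence plus optional non-motion input."""
    return command_map.get((sentence, non_motion))

def teach(sentence, non_motion, command):
    """User-driven teaching: associate a sentence (derived from a raw
    motion) with a new output command."""
    command_map[(sentence, non_motion)] = command

teach("FOLLOW_THROUGH", None, "mouse:move_right")
print(generate_command("FOLLOW_THROUGH"))  # -> 'mouse:move_right'
```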


Languages may be context driven and created for any specific application.


In one embodiment, for example golf, motions of the club may be interpreted to mean “good swing,” “fade to the right,” etc.


EXAMPLE APPLICATIONS

The Motion Description System is suitable for a number of applications:

  • Sports—Allowing a user to describe Complex Motion in terms of user-understandable language. For example, in golf the system allows a user to identify an “in-to-out, open-faced, 43 mph” swing. Other sports could include, but are not limited to, baseball, football, soccer, hockey, tennis, racquetball, squash, etc.
  • Sign Language to Spoken Language Translation—The Motion Description System can translate the signing motions into written or spoken language.
  • Military Signing—The Motion Description System can allow the military to translate silent communications via gestures, securely, to written or spoken language.
  • Musical Applications—The Motion Description System can allow time syncing a Conductor's baton to various metronomic devices via MIDI or other synchronization protocols.
  • 3D Virtual Reality Control and Video Game Interaction—The Motion Description System allows game developers to use human-understandable motion terms (e.g. “Run,” “Jog,” “Jab”) during development. These terms can then be interpreted by the Motion Description System to generate and map appropriate motions to screen/world activity.
  • Computer Control—The Motion Description System allows computer users to control their environment through the use of simple gestures.


Other embodiments of the invention will be apparent to those skilled in the art from a consideration of the specification or practice of the invention disclosed herein. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the invention being indicated by the following claims.

Claims
  • 1. A method for processing captured multi-dimensional motion data, the method comprising: receiving, by a motion interpretation unit, motion data generated by at least one motion capture device, the motion data representative of captured multi-dimensional motion; processing, by the motion interpretation unit, the received motion data to form a stream of motion component samples; receiving, by a motion interpretation unit, non-motion data; processing, by the motion interpretation unit, the received non-motion data to form non-motion samples; tokenizing, by the motion interpretation unit, the motion component samples into one or more tokens, the tokens representative of captured motion; and generating, by the motion interpretation unit, one or more output commands, the one or more output commands derived from one or more processed tokens, non-motion samples, or a combination thereof.
  • 2. A method according to claim 1, wherein the at least one motion capture device is a device selected from the group consisting of: an accelerometer, a gyroscope, an RF tag, a magnetic sensor, a compass, a global positioning unit, a fiber-optic interferometer, a piezo sensor, a strain gauge, a camera, and any combination thereof.
  • 3. A method according to claim 1, wherein at least one motion capture device is adapted to be held and moved by a user and generate motion data in response to movements.
  • 4. A method according to claim 1, wherein the step of receiving further comprises: receiving, by a data reception and transmission device, motion data generated by a motion capture device, the motion data representative of captured multi-dimensional motion; and transferring, by the data reception and transmission device, the received motion data to the motion interpretation unit.
  • 5. A method according to claim 4, wherein the data reception and transmission device receives and transmits data wirelessly.
  • 6. A method according to claim 1, wherein the at least one motion capture device transmits and receives data wirelessly.
  • 7. A method according to claim 1, wherein the step of tokenizing further comprises: comparing, by the motion interpretation unit, the motion component samples against entries in an editable token dictionary, the token dictionary comprising data stored in memory; and marking, by the motion interpretation unit, a motion component sample as valid data tokens representative of captured motion based upon the comparison, or discarding, by the motion interpretation unit, a motion component sample based upon the comparison.
  • 8. A method according to claim 1, wherein the one or more output commands are adapted to operate an application external to the motion interpretation unit.
  • 9. A method according to claim 1, further comprising: processing, by the motion interpretation unit, the one or more tokens to form sentences, the sentences comprising groups of tokens.
  • 10. A method according to claim 9, further comprising: combining, by the motion interpretation unit, one or more sentences with optionally one or more non-motion samples; and generating, by the motion interpretation unit, at least one output command associated with the combined one or more sentences and optional one or more non-motion samples, the output command adapted to operate an application external to the motion interpretation unit.
  • 11. A system for processing captured multi-dimensional motion data, the system comprising: a motion interpretation unit for receiving non-motion data and motion data generated by at least one motion capture device, the motion data representative of captured multi-dimensional motion; a data processor for processing the received motion data to form a stream of motion component samples, and for processing the received non-motion data to form non-motion samples; and a tokenizer for tokenizing the motion component samples into one or more tokens, the tokens representative of captured motion, wherein the motion interpretation unit generates one or more output commands, the one or more output commands derived from one or more processed tokens, non-motion samples, or a combination thereof.
  • 12. The system of claim 11, wherein the at least one motion capture device is a device selected from the group consisting of: an accelerometer, a gyroscope, an RF tag, a magnetic sensor, a compass, a global positioning unit, a fiber-optic interferometer, a piezo sensor, a strain gauge, a camera, and any combination thereof.
  • 13. The system of claim 11, wherein at least one motion capture device is adapted to be held and moved by a user and generate motion data in response to movements.
  • 14. The system of claim 11, further comprising: a data reception and transmission device, the data reception and transmission device adapted to receive motion data generated by a motion capture device and transfer the received motion data to the motion interpretation unit.
  • 15. The system of claim 14, wherein the data reception and transmission device receives and transmits data wirelessly.
  • 16. The system of claim 11, wherein the at least one motion capture device transmits and receives data wirelessly.
  • 17. The system of claim 11, further comprising: an editable token dictionary, the dictionary comprising data stored in memory and having multiple entries representative of tokens, the dictionary providing a reference for comparing, by the motion interpretation unit, the motion component samples against the entries in the token dictionary.
  • 18. The system of claim 11, further comprising: a command generator wherein the command generator receives processed tokens representative of captured motion and generates output commands associated with the processed tokens.
  • 19. The system of claim 18, wherein the output commands are adapted to operate an application external to the motion interpretation unit.
  • 20. The system of claim 11, further comprising: a parser, wherein the parser processes one or more tokens to form sentences, the sentences comprising groups of tokens.
  • 21. The system of claim 20, further comprising: a command generator, wherein the command generator combines one or more sentences with optionally one or more non-motion samples, and generates at least one output command associated with the combined one or more sentences and optional one or more non-motion samples, the output command adapted to operate an application external to the motion interpretation unit.
RELATED APPLICATIONS

The present application claims priority to U.S. Provisional Application No. 60/660,261, filed Mar. 10, 2005, entitled “System and Method for Interpreting and Reproducing Physical Motion,” which is incorporated herein by reference.

US Referenced Citations (60)
Number Name Date Kind
3792863 Evans Feb 1974 A
3806131 Evans Apr 1974 A
4839838 LaBiche et al. Jun 1989 A
4940236 Allen Jul 1990 A
4991850 Wilhelm Feb 1991 A
5067717 Harlan et al. Nov 1991 A
5233544 Kobayashi Aug 1993 A
5337758 Moore et al. Aug 1994 A
5472205 Bouton Dec 1995 A
5592401 Kramer Jan 1997 A
5598187 Ide et al. Jan 1997 A
5638300 Johnson Jun 1997 A
5694340 Kim Dec 1997 A
5697791 Nashner et al. Dec 1997 A
5779555 Nomura et al. Jul 1998 A
5791351 Curchod Aug 1998 A
5819206 Horton et al. Oct 1998 A
5826578 Curchod Oct 1998 A
5826874 Teitell et al. Oct 1998 A
5875257 Marrin et al. Feb 1999 A
5898421 Quinn Apr 1999 A
5903228 Ohgaki et al. May 1999 A
5907819 Johnson May 1999 A
6001014 Ogata et al. Dec 1999 A
6224493 Lee et al. May 2001 B1
RE37374 Roston et al. Sep 2001 E
6441745 Gates Aug 2002 B1
6529144 Nilsen et al. Mar 2003 B1
6821211 Otten et al. Nov 2004 B2
6908386 Suzuki et al. Jun 2005 B2
7158118 Liberty Jan 2007 B2
7176886 Marvit et al. Feb 2007 B2
7236156 Liberty et al. Jun 2007 B2
7239301 Liberty et al. Jul 2007 B2
7262760 Liberty Aug 2007 B2
7263462 Funge et al. Aug 2007 B2
20010024973 Meredith Sep 2001 A1
20020077189 Tuer et al. Jun 2002 A1
20020107085 Lee et al. Aug 2002 A1
20020123386 Perlmutter Sep 2002 A1
20030024311 Perkins Feb 2003 A1
20050032582 Mahajan et al. Feb 2005 A1
20050164678 Rezvani et al. Jul 2005 A1
20050212753 Marvit et al. Sep 2005 A1
20060025229 Mahajan et al. Feb 2006 A1
20060033711 Kong Feb 2006 A1
20060264260 Zalewski et al. Nov 2006 A1
20060287084 Mao et al. Dec 2006 A1
20060287086 Zalewski et al. Dec 2006 A1
20060287087 Zalewski et al. Dec 2006 A1
20070015558 Zalewski et al. Jan 2007 A1
20070015559 Zalewski et al. Jan 2007 A1
20070026869 Dunko Feb 2007 A1
20070050597 Ikeda Mar 2007 A1
20070052177 Ikeda et al. Mar 2007 A1
20070060228 Akasaka et al. Mar 2007 A1
20070060391 Ikeda et al. Mar 2007 A1
20070066394 Ikeda et al. Mar 2007 A1
20070072680 Ikeda Mar 2007 A1
20070178974 Masuyama et al. Aug 2007 A1
Foreign Referenced Citations (2)
Number Date Country
WO 0235184 Feb 2002 WO
PCT/US02/20119 Jan 2003 WO
Related Publications (1)
Number Date Country
20060202997 A1 Sep 2006 US
Provisional Applications (1)
Number Date Country
60660261 Mar 2005 US