1. Field of the Invention
The present invention relates to a method of using motion states of a control device for control of a system.
2. Description of the Related Art
There is considerable prior art relating to the control of video game systems. A common way to control a video game is to use an interactive game controller. An interactive game controller typically includes multiple buttons, directional pads, analog sticks, etc., to control the play of a video game. An example of such an interactive game controller is disclosed in U.S. Pat. No. 6,394,906 to Ogata entitled “Actuating Device for Game Machine”, assigned to SONY Computer Entertainment, Inc.
Another approach is sensor-driven gaming. Nintendo Co., Ltd. has pioneered the use of sensors in gaming, and certain of their systems utilize a multi-button controller having a three-axis accelerometer. The Nintendo Wii system is augmented with an infrared sensor bar. Other sensor-driven systems, such as the SONY PlayStation Move and the Microsoft Xbox Kinect, use an optical camera to detect motion in time and space.
Yet another approach to system control includes gesture-based systems. As an example, U.S. Published Patent Application 2013/0249786 to Wang entitled “Gesture-Based Control System” discloses a method of control where cameras observe and record images of a user's hand. Each observed movement or gesture is interpreted as a command. Gesture-based systems are also employed to facilitate human-computer interfaces. For example, U.S. Patent Application 2012/0280905 to Stanislav et al. entitled “Identifying Gestures Using Multiple Sensors” focuses primarily on using adaptive or mobile sensors for recognizing continuous human gestures not related to gaming or system control. As another example, WIPO Publication No. WO2011053839 to Bonnet entitled “Systems and Methods for Comprehensive Human Movement Analysis” discloses the use of dual 3D camera capture for movement analysis, combined with audio and human movement data for neurological studies and understanding.
Since the advent of the Apple iPhone in 2007, which incorporated motion sensors, many games have used these sensors to incorporate user input motion. U.S. Pat. No. 8,171,145 to Allen, et al. entitled “System and Method for Two Way Communication and Controlling Content in a Game” discloses a method to connect to a web-enabled display on the same wireless network, and to control a video game played on the display using a smartphone. However, the game control motions are similar to those of the Wii and are relatively simple.
Rolocule Games, of India, has introduced a smartphone-based tennis game, where the user plays an interactive tennis match by swinging the phone to (1) serve, (2) hit backhand shots and (3) hit forehand shots. Rolocule also has a dancing game where the phone is held in the hand and its motions are translated to those of a dancing avatar. Their method in both cases is to project the screen of the phone onto a display device via Apple TV or Google Chromecast. The game play in both cases is similar to prior art games for the Nintendo Wii.
U.S. Published Patent Application 2013/0102419 to Jeffery, et al. entitled “Method and System to Analyze Sports Motions Using Motion Sensors of a Mobile Device” describes a technique to analyze a sports motion using the sensors of a control device. Jeffery et al. use the gyroscope to define a calibration point, and the virtual impact point or release point of a sports motion is calculated relative to this point.
Smartphones can also be used to control complex systems, such as an unmanned aerial vehicle (UAV). U.S. Published Patent Application 2013/0173088 to Callou et al., entitled “Method for the Intuitive Piloting of a Drone by Means of a Remote Control,” discloses a method for control of a drone so that the user's control device motions and orientation are oriented with the drone flight direction and orientation. However, the motions of the control device are limited and continuous.
Overall, the multi-button, multi-actuator interactive game controller is currently the best device to control a complex game, as the controller enables many dimensions of data input. However, there is a significant learning curve, and the control commands are far from intuitive. For example, the controller does not simulate an actual sports motion, and complex button and actuator sequences are required to move an avatar through a virtual world and/or play sports games such as basketball or football. Furthermore, the controller is designed to work by connecting wirelessly to a gaming console.
The Wii remote provides a more realistic experience; however, the remote has several button controls and captures only gross motions of the user via the three-axis accelerometer. Typical games played using this remote are simplified sports games. With the exception of bat or racquet motions, the user's avatar responds in a pre-programmed way depending upon the gross sports motion of the player.
Current smartphone-based sports games are similar to the Wii—avatar positioning is selected from a small number of predetermined movements (typically a maximum of three) based upon the swing motion. Tennis is a primary example—the three possible motions are serve, forehand and backhand. These motions result in the avatar serving the ball or moving left or right on the court to hit the ball in response to the swing motion—however, the player cannot move the avatar towards the net, move backwards, run diagonally, jump in the air or hit a lob shot, as examples.
Furthermore, current commercially available accelerometers in mobile phones are “noisy” and the gyroscope has drift over a few seconds so that the control device requires re-calibration periodically. Prior art mobile games require a user-defined manual calibration point of the motion sensors. This limitation requires significant simplification of the possible motions for a continuous game, or requires repeated manual calibration, which is not an optimal human interaction.
This invention is for control of a system using motion states of a control device. The methods and system of the invention enable complex system control typically controlled by complex controllers, but does not require any buttons or actuators, or video capture of body movements or gestures. An embodiment of the invention utilizes the gyroscope and accelerometer motion sensors of a control device such as a smart phone, smart watch, fitness band, or other device with motion sensors connected, via a cable or wirelessly, to a processor for analysis and translation.
A motion state is defined as one of a plurality of predefined ranges of orientation around an axis in three-dimensional space. In an embodiment, the rotational motions (from the gyroscope) and acceleration (from the accelerometer) are combined into gravity sensor data, such that in an embodiment a plurality of 4³=64 states, of which at least 24 are unique, are defined for a control device. One or multiple sequential motion states are mapped via a state table into system control inputs. That is, a series of motion states defines a new and inventive motion control language with which one can control a system, such as a video game, a robot, a car, an airplane, an aerial drone or an orchestra, for example.
An appropriate analogy is binary ‘words’ in a digital computer, where, as an example, 64 sequential ones and zeros (the high and low outputs of transistor logic gates) are mapped to unique machine-language instructions for a microprocessor. However, ‘words’ in the inventive motion state language are not of a fixed length; hence, context is used to uniquely define the system action resulting from a motion sequence.
The new and inventive method described herein to control a system, such as a video game includes: (1) a motion state library defining the sequence of motion states and the corresponding system output(s), (2) a state diagram which defines the multiplicity of motion states possible and their topographical connectedness for a particular system (required if there are temporal dependencies), and (3) a state and sequence analyzer such that specific system events, such as game actions, are triggered upon detection of single or multiple motion state events and dependent upon the state diagram for the system.
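The interaction of the three components can be sketched in outline as follows. This is a minimal, hypothetical Python illustration: the state names, sequences and actions are invented for exposition and are not the embodiment's actual motion state library or state diagram.

```python
# Hypothetical sketch of the three components: (1) a motion state
# library mapping state sequences to output actions, (2) a state
# diagram constraining which state may follow which, and (3) a
# sequence analyzer that fires an action when a library sequence
# is matched. All names here are illustrative assumptions.

# (1) Motion state library: sequence of motion states -> output action
STATE_LIBRARY = {
    ("PITCH_UP", "PITCH_TOP"): "jump",
    ("ROLL_LEFT", "ROLL_RIGHT", "PITCH_UP"): "shoot",
}

# (2) State diagram: for each state, the set of states reachable from it
STATE_DIAGRAM = {
    "PITCH_UP": {"PITCH_TOP", "ROLL_LEFT"},
    "PITCH_TOP": {"PITCH_UP"},
    "ROLL_LEFT": {"ROLL_RIGHT"},
    "ROLL_RIGHT": {"PITCH_UP"},
}

def analyze_sequence(states):
    """(3) Sequence analyzer: validate each transition against the
    state diagram, then look for the longest library sequence ending
    at the current state. Returns the triggered action or None."""
    for prev, nxt in zip(states, states[1:]):
        if nxt not in STATE_DIAGRAM.get(prev, set()):
            return None  # illegal transition for this system
    for length in range(len(states), 0, -1):
        action = STATE_LIBRARY.get(tuple(states[-length:]))
        if action is not None:
            return action
    return None
```

In this sketch the library lookup is a dictionary match; a full system would also apply state logic so that the same sequence can map to different actions depending on where it occurs in the state diagram.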
There are at least four significant advantages of the invention:
The method is extensible to control a plurality of games, systems and technologies. These and other aspects, features, and advantages of the present invention will become apparent from the following detailed description of preferred embodiments, which is to be read in connection with the accompanying drawings.
a)-(f) illustrate example motion states of a control device;
a) illustrates an example of the method for a single player basketball game, wherein movement of an avatar is controlled by hand gestures corresponding to sequences of motion states;
b) illustrates the corresponding motion state table for the example illustrated in
a) and (b) illustrate state diagrams for a more complex basketball game corresponding to offense and defensive avatar control, respectively;
a)-(b) illustrate use of the method for the game of American Football including hand motions of the control device and the corresponding state diagram;
a)-(b) illustrate use of the invention for the game of tennis including hand motions of the control device and the corresponding state diagram;
a)-(b) illustrate use of the invention for the game of baseball including hand motions of the control device and the corresponding state diagram for a fielder catching and throwing the ball;
a)-(b) illustrate use of the method for the game of hockey including hand motions of the control device and the corresponding state diagram;
a)-(b) illustrate use of the method for the game of volleyball including hand motions of the control device and the corresponding state diagram;
a)-(b) illustrate use of the method for the game of soccer including hand motions of the control device and the corresponding state diagram;
a)-(b) illustrate use of the method for a fishing game including hand motions of the control device and the corresponding state diagram;
a)-(b) illustrate use of the method for a third person shooter game including hand motions of the control device and the corresponding state diagram for an avatar navigating a virtual battlefield, running/jumping, and shooting;
a)-(b) illustrate use of the method for control of a UAV, including hand motions of the control device and the corresponding state diagram, and;
For clarity and consistency, the following definitions are provided for use herein:
As used herein, an output action is a system response to a trigger event. For example, a car turning as a result of rotating the steering wheel, or letters appearing on a computer display in response to typing on a keyboard.
As used herein, a control device refers to a portable device having motion sensors, including, but not limited to, an accelerometer and a gyroscope. In certain embodiments, the motion sensors are integral to the control device. However, in other embodiments, the motion sensors can include external motion sensors. In certain embodiments the control device may have integrated memory and a processor, and in other embodiments the processing may be enabled in a console or PC based system or other mobile device, connected via a cable or wirelessly to the control device.
As used herein, a web-enabled display is any display device with the capability to connect to the Internet and display a web page.
As used herein, sensor data includes any data obtained from a sensor.
As used herein, earth gravity vector is the vector perpendicular to the surface of the earth with an acceleration of approximately 9.8 m/s² towards the center of the earth.
As used herein, gravity data is the three-dimensional vector output from a gravity sensor. The coordinate system used is non-limiting.
As used herein, attitude describes the orientation, or angular position, of an object in 3-dimensions, relative to an initial starting point.
As used herein, attitude data is the integral of angular velocity over time in a plane tangential to the surface of the earth.
As used herein, a motion state refers to one of a plurality of predefined ranges of orientation around an axis in three-dimensional space. The entire set of motion states covers the entire range of orientation around each of the axes of an orthogonal coordinate system. However, a particular system may only consider certain motion states and/or motion state sequences to be applicable.
As used herein, the global coordinate system is an orthogonal coordinate system affixed to the earth.
As used herein, the object coordinate system is an orthogonal coordinate system affixed locally to the control device.
As used herein, gravity states are ranges of orientation wherein rotations cause changes of the gravity sensor data. These ranges of rotation are around axes perpendicular to the earth gravity vector.
As used herein, attitude states are ranges of rotations entirely in a plane tangential to the earth.
As used herein, a motion state table is the set of motion states applicable to a particular system.
As used herein a motion state sequence is a series of consecutive motion states.
As used herein, the motion state library is the set of motion state sequences and the corresponding output actions for a particular system. The sequences may not be unique, so that a state sequence may have a plurality of corresponding system output actions.
As used herein, a motion state diagram defines the sequential connectedness of states in a particular system.
As used herein, motion state logic is the mapping of motion state sequences to system actions via the motion state library with the constraints of the motion state diagram.
Although the architecture of an Apple iPhone 5S is shown in
The method described herein is not limited to control devices such as Apple and Android smartphones, and the control device is not required to connect to the Internet. In an illustrative embodiment, the control device can be the Nintendo Wii controller, optionally with the MotionPlus gyroscope add-on. The Nintendo Wii controller connects via Bluetooth to a gaming console and senses acceleration along three axes using an ADXL330 accelerometer. The Wii remote also features a PixArt optical sensor, which, in combination with a 10-LED sensor bar physically connected to the game console, allows the determination of where the Wii controller is pointing.
As will be described in greater detail, an important aspect of the present invention is the quantization of motion sensor data obtained from a control device 300 into a relatively small number of discrete “motion states” and the use of specific sequences of these motion states for control of a system. A motion state refers to one of a plurality of predefined ranges of orientation around an axis in three-dimensional space. The entire set of motion states covers the entire range of orientation around each of the axes of an orthogonal coordinate system. However, a particular system may only consider certain motion states and/or motion state sequences to be applicable. A motion state may be determined from a user's movement by the motion sensors 304 of the control device 300 held in the hand. In other embodiments, the motion states may be determined from the external motion sensors 310.
Preferably, a “motion state library” can be employed to define the set of motion state sequences for a particular system, where each state sequence corresponds to at least one output action. Each particular system output action is derived by the state logic for a particular system, wherein a particular state sequence, predefined in the state library, is mapped to the appropriate output action of the system via state logic rules. That is, the system action for a state sequence is determined by where the state sequence occurs in a state diagram.
For the control device 300 we define an object coordinate system, which is a Cartesian coordinate system (X, Y and Z), such that Y is along the long axis of the device, X is perpendicular in the short axis of the device, and Z is perpendicular to the face. We similarly define a Cartesian global coordinate system (Xg, Yg, Zg) such that the Zg axis is perpendicular to the surface of the Earth and Xg, Yg are in the plane tangential to the surface of the earth. The transformation from the global to object Cartesian coordinate system or vice versa is straightforward following well-known methods of matrix algebra given the respective angles of rotation of the axes. It is to be understood that various other coordinate systems may be defined in space, and that the choice and placement of the coordinate system described herein is non-limiting. However, in practice it has been found that the coordinate system described herein is an elegant and useful approach for many applications.
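The global-to-object transformation noted above is a composition of rotations about the coordinate axes. As a minimal sketch under standard right-handed conventions (the angle and vector below are illustrative, not from the embodiment), a single rotation about the Z axis can be written as:

```python
import math

def rotate_z(v, theta):
    """Rotate vector v = (x, y, z) about the Z axis by theta radians.
    This is one factor of the full global-to-object transformation,
    which composes rotations about each axis via matrix algebra."""
    x, y, z = v
    c, s = math.cos(theta), math.sin(theta)
    return (c * x - s * y, s * x + c * y, z)
```

Composing three such single-axis rotations (about X, Y and Z) yields the full transformation between the global coordinate system (Xg, Yg, Zg) and the object coordinate system (X, Y, Z).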
a) to (f) illustrate exemplary motion states derived from 90-degree rotations of a gravity sensor, all starting 45 degrees from the axes of the global coordinate system Xg, Yg, Zg.
It is to be understood that the coordinate system and the bisected angles may vary in a particular embodiment. That is, the motion states can be defined to be arbitrary rotational angles and it may be advantageous in a particular embodiment to bisect the arc in one dimension more than another, so as to define motion states with higher fidelity in a particular direction. It is to be further understood that the number of states in a particular dimension may vary from the provided examples. The illustrated embodiments are merely exemplary illustrations for a preferred embodiment with 90-degree states: if one were to choose 45-degree rotations the number of states would double, for example. Furthermore, there may be a multitude of control device sensors used to detect these states, and hence the specific sensors used, and the specific outputs of a sensor used to define a particular state, are understood to be non-limiting.
It is to be further understood that multiple sensors may detect different states simultaneously, and that while the invention is illustrated by examples with a single control device 300 the method is extensible to multiple state analysis, with a control device 300 held in one hand and additional sensors 310 on a wrist, for example. These examples are understood to be non-limiting as the method is extensible to an arbitrary number of sensors attached to different parts of the body, such as the ankles, elbows, knees, and head.
Referring to
As the control device 300 is rotated in space, transitions from one state to another can be detected by looking for when the X, Y, Z gravity data (GravityX, GravityY, GravityZ) has crossed the boundary from one state to another. For the 90-degree states of the exemplary illustration, the transitions between states are demarked by the crossings of projection of the earth gravity vector at 45 degrees. That is, for gravity states the object coordinate system is understood to be rotating in space about axes that are tangential to the earth gravity vector, see
Typical gravity data outputs of motion sensors 304 have maximum ranges from +9.8 m/s² to −9.8 m/s². The magnitude of the earth gravity vector at 45 degrees projected towards the center of the earth is given by:
gZ = g sin(45°) = 9.8 m/s² × 0.707 ≈ 6.93 m/s² ≈ 7 m/s².
In an embodiment, we define ranges of gravity sensor data so that motion states can be easily detected in one of three ranges of gravity data: High (greater than +7 m/s²), Middle (between −7 and +7 m/s²), and Low (less than −7 m/s²). So the motion states
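The three-range quantization can be sketched as follows. The per-axis triple and the 7 m/s² threshold follow the derivation above; the function names and the triple representation are illustrative assumptions rather than the embodiment's actual state table.

```python
G_THRESH = 7.0  # m/s^2, approximately g * sin(45 deg) as derived above

def gravity_range(g):
    """Quantize one gravity-sensor component into the three ranges
    High / Middle / Low used to detect motion states."""
    if g > G_THRESH:
        return "HIGH"
    if g < -G_THRESH:
        return "LOW"
    return "MIDDLE"

def motion_state(gx, gy, gz):
    """In this simplified sketch, a candidate motion state is the
    triple of per-axis ranges; a real system would map only the
    physically reachable combinations into its motion state table."""
    return (gravity_range(gx), gravity_range(gy), gravity_range(gz))
```

For example, a control device lying face-up on a table would report roughly (0, 0, 9.8) and quantize to ("MIDDLE", "MIDDLE", "HIGH").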
For example, the Pitch-Top motion state,
In an alternate embodiment, for attitude states we use the magnetic compass sensor to orient the Xg, Yg axes of the global coordinate system. In this embodiment, attitude state changes are detected relative to the user's magnetic North, wherein an offset angle is used to place the global coordinate system; the offset angle is the difference between the average attitude of the user and magnetic North. In an embodiment, we calibrate the attitude of the control device using the average attitude direction of the user's first few gravity state motions, preferably three. The calibration provides an offset angle relative to magnetic North, from which we can orient the global coordinate system. However, the calibration method is understood to be non-limiting: an application may require the user to hold the control device in their preferred attitude direction for a period of time, preferably holding still for one second, before initiating a motion state sequence, or, in an alternate embodiment, the calibration may be executed at each gravity state change, as an example. The use of the magnetic compass sensor has the advantage of orienting the user relative to a fixed direction on the Earth, which may be useful for applications including UAVs, for example.
Accelerometer and magnetic compass sensor data are noisy, however, and often contain spikes from high-frequency movements. In an embodiment it may be advantageous to apply a low pass filter to accelerometer and magnetic sensor data, or preferably a Kalman filter if there are constraints on the motion states, in order to remove the high-frequency component; see, for example, U.S. Pat. No. 8,326,533 to Sachs et al., entitled “Apparatus and Methodology for Calibration of a Gyroscope and a Compass Included in a Handheld Device,” which is incorporated by reference herein in its entirety. Sensor fusion techniques are well known in the art; see, for example, U.S. Pat. No. 8,441,438 to Ye et al., entitled “3D Pointing Device and Method for Compensating Movement Thereof.” In an embodiment, a sensor fusion method combining accelerometer and gyroscope sensor data and/or magnetic and gyroscope sensor data is used to more accurately calculate the gravity data and/or magnetic compass data, respectively.
As an illustrative example for the pitch-up motion state, see
Note that the embodiment detects transitions between states and is robust, so that the exact gravity or attitude sensor reading is not required to identify a change in state. In an embodiment, it is useful to define a threshold range Δ, approximately 10% of the threshold values in gravity and attitude data, so that a state change is recorded to have occurred only if the gravity or attitude data has crossed the state transition boundary by plus or minus Δ. Hence, if the control device is held close to a state boundary, the state does not change unless the gravity or attitude data has crossed the threshold plus or minus Δ. The Δ used in an embodiment is understood to be non-limiting.
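The hysteresis band Δ can be sketched as follows for a single gravity-data boundary; the threshold and band values are illustrative, with Δ taken as 10% of the 7 m/s² boundary.

```python
THRESH = 7.0   # state-boundary threshold in gravity data (m/s^2)
DELTA = 0.7    # hysteresis band, approximately 10% of the threshold

def crossed(prev_state_high, g):
    """Report a state change only if the reading clears the boundary
    by the band DELTA, so jitter near the boundary does not cause the
    state to toggle back and forth."""
    if prev_state_high:
        return g < THRESH - DELTA   # must drop clearly below the boundary
    return g > THRESH + DELTA       # must rise clearly above the boundary
```

A reading of 7.3 m/s² while below the boundary is thus ignored, while 7.8 m/s² registers a transition.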
These data from 135 are then passed to the sequence analyzer 170. The sequence analyzer 170 maps the motion state sequence to the appropriate system control output 201. The motion state library 145 contains predefined motion state sequences, and the input state sequences 135 are monitored in 140 for matches 147 against the motion state library. If no motion state sequence is found, the method returns 148 to the state analyzer 100 to continue monitoring for motion state changes. If a motion state sequence is found, state logic 175 is applied, which determines the correct trigger event 180 given the constraints of the motion state diagram 150. The method 250 output is the correct trigger event 180, which is the input to control the system 200, which in turn creates the output action 201. Note that the state logic 175 can be complex and take into consideration various factors beyond just the change of state of the controller.
As an example of complex logic 175 for an embodiment applicable to sports games, the state logic 175 can trigger the analysis of sports motions, which are then input to a gaming graphics engine 200. The sports motion analysis follows the method of Jeffery, et al. U.S. Published Patent Application 2013/0102419, “Method and System to Analyze Sports Motions Using Motion Sensors of a Mobile Device”, wherein the calibration point at the beginning of the sports motion is selected as a transition to the appropriate motion state in the related motion state sequence.
It is to be understood that many variations of the method 250 are realizable, and that the steps may be undertaken in a different order in a particular embodiment, and may be distributed across processing devices, and therefore the method in this example is non-limiting.
Preferably, the method 250 is asynchronous, with events arriving and being detected as they occur. However, in an alternate embodiment the method 250 may be synchronous, with clocking of the state analyzer engine 100 and the sequence analyzer 170, so that the motion state sequence has a well-defined periodicity; if no motion state change occurs in a clock period, the motion state is duplicated in the motion state sequence table. In this example the sequence analyzer has logic 140 to manage duplicates and accurately detect state sequences.
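The duplicate management in the synchronous variant can be sketched as a simple run-collapse applied before sequence matching; this is an illustrative sketch, not the embodiment's actual logic 140.

```python
def collapse_duplicates(clocked_states):
    """In the synchronous variant an unchanged motion state is repeated
    each clock period; collapse runs of identical states so that the
    sequence analyzer sees only genuine state changes."""
    out = []
    for s in clocked_states:
        if not out or out[-1] != s:
            out.append(s)
    return out
```

For example, a clocked trace of PITCH_UP, PITCH_UP, PITCH_TOP reduces to the two-state change sequence before library matching.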
The present invention will be further clarified by the following example of a basketball game implemented using techniques described herein, according to an embodiment of the present invention. For illustrative clarity, the example has a limited number of states and assumes an embodiment where the control device 300 is held in the hand of the user 010. As discussed previously, many embodiments are possible with current and future technology and sensor configurations; hence the example is non-limiting.
As an illustrative example, in
The actual sports motion analysis for the basketball throw is computed separately from the motion state sequence analysis.
In an embodiment, we define a “special move” as a motion state sequence that occurs in a predefined time interval. As an illustrative example: four states in succession executed in less than 3 seconds.
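The timing test for a “special move” can be sketched as follows, assuming timestamped states and the three-second window of the example; the function name and data layout are illustrative assumptions.

```python
def is_special_move(timed_states, required, window=3.0):
    """timed_states: list of (timestamp_sec, state) pairs in arrival
    order. Returns True if the last len(required) states match the
    required sequence and were executed within `window` seconds,
    per the 'special move' definition."""
    n = len(required)
    if len(timed_states) < n:
        return False
    tail = timed_states[-n:]
    states = [s for _, s in tail]
    elapsed = tail[-1][0] - tail[0][0]
    return states == list(required) and elapsed < window
```

The same four-state sequence executed over four seconds would therefore not trigger the special move, while the identical sequence within two seconds would.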
The “special move” is an illustrative example of how the output action 201 is not necessarily just a predetermined ‘stock’ action and the state logic 175 can involve complex analysis. For example, the “windmill dunk” special move requires four state changes, and results in a predefined windmill dunk avatar sequence. However, additional variables including the direction, speed and velocity of the various state transitions are used to produce a slow versus fast windmill dunk, or a complete miss if the state sequence was executed with bad timing, as examples. Hence, additional data including, but not limited to, data from the sensors 304 may be combined in the state logic 175 to create a complex output action 201 that is not pre-determined.
The exemplary basketball game motion, motion state diagram, and motion state library presented herein are simplified for the sake of clear exposition.
a) and (b) illustrate user 010 basketball game play in a preferred embodiment.
As shown, the three major components of the gaming platform 500 are the control devices 300, a gaming server 400, and display devices 350. The gaming server 400 includes a gaming rules engine 450 that manages a plurality of games being played. As shown, the gaming rules engine 450 has access to a user database 455, and a gaming resources database 460. The user database 455 stores login information and game information. For basketball, the game information can include swing data for each shot made during the game, the player's current score, current level number, etc. The gaming resources database 460 can include graphical content for simulating the game on the display device 350.
In the illustrated embodiment, the gaming server 400 is cloud-based, enabling global connectivity via the Internet 550. For each user, the user's control device 300 and display device 350 can be simultaneously connected to the gaming server 400 through separate and distinct Internet connections. The control device 300 transmits data, including analyzed motion states and state sequences and other data, to the gaming server 400; in turn, the gaming server 400 facilitates display of gaming media at the display 350 through a separate Internet connection. In an embodiment, a lightweight gaming graphics engine 420, in the form of a software application, can be pushed or downloaded to a suitable Web-enabled display device 350, where a substantial amount of the logic of the gaming rules engine 450 is encoded; the gaming graphics engine 420 can then perform much of the work otherwise to be performed directly at the gaming server 400.
In the following description of the present invention, exemplary methods for performing various aspects of the present invention are disclosed. It is to be understood that the methods and systems of the present invention disclosed herein can be realized by executing computer program code written in a variety of suitable programming languages, such as C, C++, C#, Objective-C, Visual Basic, and Java. It is to be understood that in some embodiments, substantial portions of the application logic may be performed on the display device using, for example, the AJAX (Asynchronous JavaScript and XML) paradigm to create an asynchronous web application. Furthermore, it is to be understood that in some embodiments the software of the application can be distributed among a plurality of different servers (not shown).
It is also to be understood that the software of the invention will preferably further include various Web-based applications written in HTML, PHP, Javascript, XML and AJAX accessible by the clients using a suitable browser (e.g., Safari, Internet Explorer, Mozilla Firefox, Google Chrome, Opera).
In a preferred embodiment 500, we implement the method 250 as a native application 306 for both Apple iOS and Android control devices 300, the gaming rules engine 450 using Amazon Web Services, and the web-enabled display 350 for all major commercially available web browsers (Chrome, IE, Firefox and Safari). Preferably, we use the Unity 3D 4.5.2 graphics engine, called from the application 306 and installed in an appropriate HTML 5.0 web page of the web-enabled display 350.
Data capture on an Apple device is enabled via the Apple iOS CMMotionManager object, which captures device motion data: attitude, accelerometer and gravity. We use the gravity property (a CMAcceleration) of the CMDeviceMotion object to capture the gravity sensor data. We use the attitude property (a CMAttitude) of the CMDeviceMotion object to capture the attitude sensor data. We call the startDeviceMotionUpdatesToQueue:withHandler: method of the CMMotionManager object to begin the data capture. Data is captured at 1/100th-second intervals; we set the data capture interval using the deviceMotionUpdateInterval property.
On an Android device we capture the sensor data using the SensorManager class. An instance of this class is created by calling Context.getSystemService( ) with SENSOR_SERVICE as a parameter. To capture the gravity data, we call the getDefaultSensor method of the SensorManager class, passing the parameter TYPE_GRAVITY. To capture the gyroscope data, we call the getDefaultSensor method of the SensorManager class, passing the parameter TYPE_GYROSCOPE. We use the registerListener method of the SensorManager class to start the data capture and to set the rate of the data capture. For both Apple and Android, we use these sensor data as inputs to the programmed method 250 within the native application 306.
We communicate data in the platform 500 using web socket connections. The control device 300 uses the WebSocket API to send data to the gaming server 400, and the browser 350 where the Unity 3D graphics engine is installed on the control device 300 and the web-enabled display 350. A web socket connection with the browser is persistent for the duration of a played game.
We use the WebSocket API to receive data from the control device 300 and communicate with the Unity 3D game engines. As an example, when UnityAndroid completely loads, it sends a callback "gameLoadedOnDevice()" to our native app. In the UnityWeb case, it sends a socket callback to a native browser app. The native browser app sends the details of the play result back to UnityWeb by calling unity.sendMessage("unity function"). To replicate the device's behavior on the web-enabled display 350, UnityAndroid or UnityiOS does all the socket communication with the server via the native app only. Appropriate methods that handle the socket calls are defined in the native app 306, and Unity simply calls those methods whenever needed. The native app also listens for the responses to network calls and communicates these data back to Unity via unity.sendMessage("unity function").
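The specification does not define the payload carried over the web socket, only that trigger events are communicated between the native app, the server, and Unity. As a minimal sketch, a trigger event could be framed as a compact JSON string; the field names and framing below are assumptions for illustration:

```java
// Illustrative wire format for trigger events sent over the web socket.
// Field names ("type", "event", "ts") are assumptions; the specification
// does not define the payload, only that trigger events are communicated.
public class TriggerMessage {

    /** Encode a trigger event name and timestamp as a JSON string. */
    public static String encode(String event, long timestampMs) {
        return "{\"type\":\"trigger\",\"event\":\"" + event
             + "\",\"ts\":" + timestampMs + "}";
    }

    /** Extract the event name from an encoded message (naive parse). */
    public static String decodeEvent(String json) {
        int k = json.indexOf("\"event\":\"") + 9;
        return json.substring(k, json.indexOf('"', k));
    }
}
```

In practice the native app 306 would hand such a string to the WebSocket send call, and the receiving side would decode it before invoking unity.sendMessage.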
The method 250 algorithm keeps running in the background once a user 010 starts UnityAndroid or UnityiOS. Whenever the method 250 detects a state sequence 135 defined in the state library 145, subject to the state diagram 150 and state logic 175, the method 250 sends the trigger event 180 to UnityAndroid or UnityiOS, and a web socket call to UnityWeb. It is to be understood that the software and system calls disclosed in this preferred embodiment will change in the future, and therefore the embodiment is non-limiting.
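The background detection loop can be sketched as a sliding-window matcher: states arrive one at a time, and a trigger fires when the most recent states equal a sequence from the state library. The class and method names below are illustrative assumptions:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Illustrative background sequence detector: consumes one detected state
// at a time and reports when the most recent states match a library
// sequence, at which point a trigger event would be sent to Unity.
public class SequenceDetector {
    private final List<String> pattern;   // a state sequence from the library
    private final Deque<String> window = new ArrayDeque<>();

    public SequenceDetector(List<String> pattern) { this.pattern = pattern; }

    /** Feed one detected state; returns true when the pattern completes. */
    public boolean accept(String state) {
        window.addLast(state);
        if (window.size() > pattern.size()) window.removeFirst();
        return window.size() == pattern.size()
            && List.copyOf(window).equals(pattern);
    }
}
```

One such detector per library sequence, fed from the state analyzer output, suffices for the single-device case.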
For clarity in the basketball example, we illustrated the method using a single control device 300 with integrated motion sensors 304; however, this example is non-limiting. The method 250 can be extended to multiple sensor 304 inputs 001, from the control device 300 and other connected devices 310. In a preferred embodiment, a motion state analyzer 100 is used for each of the sensor inputs. The sequence analyzer 170 is then extended to receive multiple state sequences 135, wherein the state library 145 defines combinations of multiple-sensor 304 states, for defensive blocking and stealing with both arms as an example, and the state diagram 150 is similarly extended so that the state logic outputs the correct trigger event for the multiple state sequence input.
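One simple way to realize the multi-sensor extension is to fuse simultaneous states from two sensors into joint states, so the existing single-stream sequence matching applies unchanged to the fused stream. The sketch below is an assumption about one possible implementation, not the disclosed one:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative two-sensor extension: states from the primary control device
// and a second connected device are fused into joint states, so the state
// library can define two-arm combinations (e.g. defensive blocking).
public class JointState {

    /** Fuse simultaneous states from two sensors into one joint state. */
    public static String fuse(String primary, String secondary) {
        return primary + "+" + secondary;
    }

    /** Fuse two state sequences element-by-element (shorter length wins). */
    public static List<String> fuseSequences(List<String> a, List<String> b) {
        List<String> out = new ArrayList<>();
        for (int i = 0; i < Math.min(a.size(), b.size()); i++)
            out.add(fuse(a.get(i), b.get(i)));
        return out;
    }
}
```

The state library 145 then stores sequences of fused states rather than single-sensor states.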
In the following description we illustrate a multitude of possible variations of the present invention as applied to video games such as football, tennis, baseball, hockey, volleyball, soccer, shooter, and fishing, through their respective motion state diagrams. These examples are understood to be illustrative and non-limiting. For brevity, we disclose embodiments via the respective hand motions 075 and the motion state diagrams 150 for each example, since these diagrams, with the motion state table, enable the method 250.
a) illustrates hand motions 075 to control an avatar quarterback (QB) of a football game. The motions are primarily rotations in the yaw states, moving completely through back, top, forward and bottom with some variant states off of yaw-forward. There is one pitch state (pitch-up-left) which is reachable from yaw-bottom.
a) illustrates the hand motions 075 of a tennis player and
a) illustrates the hand motions 075 of a baseball infielder and
It is to be understood that many additional games may be derived from the hand motion states 075 and the motion state sequences 150 illustrated in FIGS. 5, 6 and 12-19. Specifically, badminton, squash, and handball are derivatives of the illustrative example for tennis,
The method described herein has many applications to systems and control other than computer games.
The illustrative inventive method 250 can be implemented in the control device 300, or can be distributed across both the control device 300 and the system controller 600. As an illustrative example, the state analyzer 100 may be implemented in the control device 300, and the states 135 passed to the system controller 600, wherein the sequence analyzer 170 is implemented. It is understood that many variations of the implementation are possible by those skilled in the art, and hence the example is non-limiting.
An embodiment of the architecture
The basic LEGO Mindstorms EV3 kit comprises building instructions for the starter robot TRACK3R, connector cables, 1 USB cable, 594 LEGO Technic elements, 1 EV3 Brick (600), 2 Large Interactive Servo Motors, 1 Medium Interactive Servo Motor, 1 Touch Sensor, 1 Color Sensor, 1 Infrared Sensor, and 1 Infrared Beacon.
The EV3 processor 600 may be programmed with the LEGO MyBlocks visual object-oriented programming language, and users can also program in LabVIEW and Robot C. Programs may be written and compiled on an appropriate Apple or Windows PC and transferred to the EV3 processor via a USB cable connection. The user then runs the EV3 programs by touching the screen of the EV3 Brick 600, or via the EV3 Robot Commander App on a Bluetooth-connected iPad/iPhone/iPod or similar Android device.
EV3 software is an open-source platform, and Robot C is a C-based programming language that can access the API library for the EV3 Brick. These APIs include standard APIs to connect to and communicate with iOS applications 306 via Bluetooth. Hence, the method 250 can be implemented on the controller 300, as described previously herein, and used to control the Mindstorms robot via a Bluetooth connection to the EV3 brick.
In an embodiment, we define motion states of the control device 300 to trigger software program execution on the EV3 brick 600. As an illustrative example, the motion states p-down, p-up trigger the robot moving forward; p-down-left and p-down-right trigger turns left and right, respectively; and p-up, p-left trigger shooting of plastic balls. The example is illustrative, however, and there is considerable flexibility in the design and software programming possible for the EV3 robot, with the ability to integrate the Servo Motors, Touch Sensor, Color Sensor, Infrared Sensor, and Infrared Beacon, via the servo motor output interface 609 and the sensor interface 610, into new and unique program modules, each of which can be triggered by a motion state sequence of the control device 300.
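The state-to-command assignments in this example can be sketched as a simple lookup from completed state sequences to robot commands. The command names are illustrative assumptions; an actual implementation would issue the corresponding motor and API calls to the EV3 brick over Bluetooth:

```java
import java.util.List;
import java.util.Map;

// Illustrative mapping from detected motion-state sequences to EV3 robot
// commands, following the example in the text: p-down, p-up drives forward;
// p-down-left / p-down-right turn; p-up, p-left shoots plastic balls.
// The command strings are assumptions, not actual EV3 API calls.
public class Ev3CommandMap {
    private static final Map<List<String>, String> COMMANDS = Map.of(
        List.of("p-down", "p-up"), "FORWARD",
        List.of("p-down-left"),    "TURN_LEFT",
        List.of("p-down-right"),   "TURN_RIGHT",
        List.of("p-up", "p-left"), "SHOOT");

    /** Look up the robot command for a completed state sequence. */
    public static String commandFor(List<String> sequence) {
        return COMMANDS.getOrDefault(sequence, "NONE");
    }
}
```

New program modules on the EV3 then correspond to new entries in this table, each keyed by a motion state sequence of the control device 300.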
As another illustrative example, the DENSO VP robot arm has AC servomotors 611 for each of its six axes of motion, is controlled by the RC8 Robot Controller 600, and is programmable in the DENSO robot language (PacScript). The ORiN2 SDK enables application development and integration of PC Visual Basic, C++, Delphi or LabVIEW with the DENSO robot and sensors. Hence, one can integrate a control device 300 via the ORiN2 SDK for an embodiment of the architecture 650 for an industrial robot 660.
For an alternate robotic embodiment illustrative of the architecture
An exemplary illustrative embodiment of a UAV is the DJI Phantom 2 Vision+ quadcopter with a gimbal-stabilized 14 MP, 1080p camera, manufactured by DJI and primarily designed for aerial photography applications. The system consists of four DC-motor-driven propellers 611, a GPS receiver 610, and an A2 IMU flight controller 600, all powered by a 5200 mAh lithium-polymer battery 607. The A2 IMU flight controller 600 is the “brains” of the system and consists of a processor 603, memory 605, and various sensors 610 that enable stabilized flight: the flight controller application software 606 varies the pitch and rotational velocity of the propellers 611, and the UAV system 650 is controlled by a ground-based control device 300.
Callou et al. discloses a method for control of a UAV 655 such as the DJI Phantom 2 Vision+, whereby a control device 300 with an integrated accelerometer and gyroscope, such as an iPhone/iPod Touch/iPad, is rotated in the stationary pilot's hands to control the flight of the UAV 655. The bidirectional exchange of data between the UAV 655 and the control device 300 is enabled by a Wi-Fi (IEEE 802.11) or Bluetooth link 308. US 20130173088 A1, however, makes use of continuous motions of pitch and roll of the control device, rather than motion states.
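The distinction matters at the implementation level: a continuous-control scheme feeds raw pitch and roll angles to the flight controller, whereas the motion-state approach quantizes them into discrete states. A minimal sketch of such quantization, with hysteresis so that jitter near a threshold does not flip the state, is shown below; the thresholds and state names are illustrative assumptions:

```java
// Illustrative quantizer: converts a continuous pitch angle (degrees) into
// discrete motion states, with hysteresis so small jitter near a threshold
// does not flip the state. Thresholds are assumptions for illustration.
public class PitchQuantizer {
    private static final double ENTER = 30.0; // degrees to enter p-up/p-down
    private static final double EXIT  = 20.0; // degrees to return to neutral
    private String state = "neutral";

    /** Feed one pitch sample; returns the current discrete state. */
    public String update(double pitchDegrees) {
        if (state.equals("neutral")) {
            if (pitchDegrees >= ENTER)       state = "p-up";
            else if (pitchDegrees <= -ENTER) state = "p-down";
        } else if (Math.abs(pitchDegrees) <= EXIT) {
            state = "neutral";
        }
        return state;
    }
}
```

The resulting discrete states, rather than the raw angles, are then matched against the state library to trigger flight commands.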
As an illustrative example using motion states to control the flight of a UAV 655, in
As a final example, in
The inventive method 250 with multiple sensors could further be used for control of the orchestra, wherein the primary control device 300 is held in the right hand of the user 010 and a secondary wirelessly connected control device 310 is attached to the left wrist of the user 010. States can be defined for the sensors of device 310, such that changing the motion state of device 310 from p-down to p-up could trigger the raising of the left arm of the robot 660, which, in conjunction with the motion state sequences from the device 300 controlling the right arm of the robot 660, would signal the orchestra to play with increased intensity, for example.
While this invention has been described in conjunction with the various exemplary embodiments outlined above, it is evident that many alternatives, modifications and variations will be apparent to those skilled in the art. Accordingly, the exemplary embodiments of the invention, as set forth above, are intended to be illustrative, not limiting. Various changes may be made without departing from the spirit and scope of the invention.