A gesture is a pose, action, or motion that communicates the intent of a human. Gesture detection is the ability of a computer to recognize a human gesture. In this context, gesture detection includes analyzing a human subject's full or partial body while the body is moving or static to determine whether or not a particular gesture is being performed. A binary determination and related confidence can be made for each of one or more different possible gestures—e.g., PlayerIsKicking (85% confident).
Some gesture detection systems include one or more sensors that are used to observe a human subject. For example, a gesture detection system may include a depth camera, a visible light camera, and/or other sensors. Furthermore, a gesture detection system may include a processing module configured to analyze data from the sensors and generate a virtual skeleton that models a pose of the human subject.
Even when a human subject can be perfectly modeled with a virtual skeleton, it remains a difficult challenge to determine whether or not the human subject is intending to perform a particular gesture.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. Furthermore, the claimed subject matter is not limited to implementations that solve any or all disadvantages noted in any part of this disclosure.
According to one aspect of the disclosure, a gesture detection module is trained to determine if a gesture model, such as a virtual skeleton modeling a human subject, has performed a particular gesture. The gesture detection module is trained via machine learning to identify one or more features of the gesture model and indicate if the feature(s) collectively indicate the particular gesture.
The herein disclosed gesture detection system uses machine learning to recognize human gestures. One or more machine learning modules are trained to recognize complex patterns in example observation data. Once sufficiently trained, the machine learning modules can be used to interpret previously unseen observation data and determine if such observation data corresponds to a particular gesture. While primarily described below with reference to depth camera analysis of full body gestures, different types of user gestures may be detected via a variety of different platforms without departing from the scope of this disclosure. As nonlimiting examples, the methodology described below can be used to detect finger gestures on a touch screen, hand gestures via a visible light camera, or game-controller gestures via motion analysis using accelerometers, gyroscopes, and/or camera tracking.
The computing system 10 may include a sensor input to receive observation information from one or more sensors. As a non-limiting example, the computing system may include a universal serial bus configured to receive depth images and/or color images from one or more input devices including a depth camera and/or a visible light camera.
Game player 18 is tracked by depth camera 22A so that the movements of game player 18 may be interpreted by computing system 10 as controls that can be used to affect the game being executed by computing system 10. In other words, game player 18 may use his or her physical movements to control the game without a conventional hand-held game controller or other hand-held position trackers. For example, in
Depth camera 22A may also be used to interpret target movements and/or static poses as operating system and/or application controls that are outside the realm of gaming. Virtually any controllable aspect of an operating system and/or application may be controlled by static and/or dynamic gestures of game player 18. The illustrated scenario in
During image collection 28, game player 18 and the rest of observed scene 24 may be imaged by a depth camera 22A. In particular, the depth camera is used to observe gestures of the game player. During image collection 28, the depth camera may determine, for each pixel, the depth of a surface in the observed scene relative to the depth camera. Virtually any depth finding technology may be used without departing from the scope of this disclosure. Example depth finding technologies are discussed in more detail with reference to
During depth mapping 30, the depth information determined for each pixel may be used to generate a depth map 32. Such a depth map may take the form of virtually any suitable data structure, including but not limited to a depth image buffer that includes a depth value for each pixel of the observed scene. In
During skeletal modeling 34, one or more depth images (e.g., depth map 32) of a world space scene including a computer user (e.g., game player 18) are obtained from the depth camera. Virtual skeleton 36 may be derived from depth map 32 to provide a machine readable representation of game player 18. In other words, virtual skeleton 36 is derived from depth map 32 to model game player 18. The virtual skeleton 36 may be derived from the depth map in any suitable manner. In some embodiments, one or more skeletal fitting algorithms may be applied to the depth map. For example, a prior trained collection of models may be used to label each pixel from the depth map as belonging to a particular body part, and virtual skeleton 36 may be fit to the labeled body parts. The present disclosure is compatible with virtually any skeletal modeling technique. In some embodiments, machine learning may be used to derive the virtual skeleton from the depth images.
The virtual skeleton provides a machine readable representation of game player 18 as observed by depth camera 22A. The virtual skeleton 36 may include a plurality of joints, each joint corresponding to a portion of the game player. Virtual skeletons in accordance with the present disclosure may include virtually any number of joints, each of which can be associated with virtually any number of parameters (e.g., three dimensional joint position, joint rotation, body posture of corresponding body part (e.g., hand open, hand closed, etc.) etc.). It is to be understood that a virtual skeleton may take the form of a data structure including one or more parameters for each of a plurality of skeletal joints (e.g., a joint matrix including an x position, a y position, a z position, and a rotation for each joint). In some embodiments, other types of virtual skeletons may be used (e.g., a wireframe, a set of shape primitives, etc.).
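As a non-limiting illustration of such a data structure, the following Python sketch stores an x, y, z position and a single rotation value for each of a plurality of skeletal joints. The joint names and the one-rotation-per-joint simplification are illustrative assumptions made for this sketch only, not requirements of this disclosure.

import numpy as np

# Hypothetical joint ordering; a real virtual skeleton may track more joints and
# additional per-joint state (tracking confidence, hand open/closed, etc.).
JOINT_NAMES = ["head", "center_shoulder", "left_shoulder", "right_shoulder",
               "left_elbow", "right_elbow", "left_hand", "right_hand",
               "spine", "left_knee", "right_knee"]

class VirtualSkeleton:
    """One frame of skeletal data: x, y, z position plus a rotation per joint."""
    def __init__(self):
        # One row per joint: [x, y, z, rotation]
        self.joints = np.zeros((len(JOINT_NAMES), 4), dtype=np.float32)

    def set_joint(self, name, x, y, z, rotation=0.0):
        self.joints[JOINT_NAMES.index(name)] = (x, y, z, rotation)

    def position(self, name):
        return self.joints[JOINT_NAMES.index(name), :3]

# Example usage:
# skeleton = VirtualSkeleton()
# skeleton.set_joint("right_hand", 0.4, 1.1, 2.3)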
Skeletal modeling may be performed by the computing system. In particular, a skeletal modeling module may be used to derive a virtual skeleton from the observation information (e.g., depth map 32) received from the one or more sensors (e.g., depth camera 22A of
The above described virtual skeleton is provided as a nonlimiting example of a gesture model. Virtual skeletons are well suited for modeling the full-body posture and/or movements of a human subject. Other aspects of a human subject may be modeled with other types of gesture models without departing from the scope of this disclosure. As a nonlimiting example, touch patches measured by a touch pad or touch screen may model a user's finger gestures when performing touch inputs. As another example, a color image may model a user's fingers/hands when performing hand gestures.
As introduced above, even if a human subject is perfectly modeled with a virtual skeleton or other gesture model, it remains a difficult challenge to determine if a human subject is performing any particular gesture. For example, it may be difficult to accurately detect gestures by writing particular detection algorithms in the form of executable code for each different gesture. A very large number of input parameters, threshold values, and weights must be tuned and understood for such manually coded gesture detection algorithms to work reliably for a wide variety of different people in a wide variety of different environments. Manually fine tuning the large number of input parameters, thresholds, and weights can be extremely time consuming and frustrating. Furthermore, maintaining the code can be very cumbersome. For example, in the case of a gesture model in the form of a virtual skeleton, if any of the characteristics of the skeletal data provided by the skeleton modeler change (e.g., changing the number of joints of a virtual skeleton, how those joints are tracked, how those joints behave when occluded, and/or the type and amount of noise present on the joints) then all input parameters, thresholds, and weights must again be recognized, understood, and manually tuned for each different gesture. In many cases, the entire gesture detection algorithm may need to be rewritten. If manually coded algorithms are not working well for a particular type of human subject (e.g., small child or heavy adult) and/or a particular type of environment, the algorithm must be recoded manually. Furthermore, manually coded algorithms typically rely on some history of previous frames to detect a gesture, thus introducing latency.
These challenges can be overcome by using machine learning to detect gestures, as indicated at 38. As used herein, machine learning is used to refer to artificial intelligence programming that configures a computer to recognize complex patterns in data. As described in more detail below, a gesture detection module trained via machine learning can be used to accurately assess whether or not a human subject is performing a particular static or dynamic gesture.
During game output 40, the physical movements of game player 18 as recognized via skeletal modeling 34 and/or gesture detection 38 are used to control aspects of a game, application, or operating system. In the illustrated scenario, game player 18 is playing a fantasy themed game and has performed a spell throwing gesture. The machine learning gesture detection recognizes the gesture, and displays an image of the hands of a player character 16 throwing a fireball 42. In some embodiments, an application may leverage various graphics hardware and/or graphics software to render an interactive interface (e.g., a spell-casting game) for display on a display device.
As shown in
It is to be understood that multiclass training may additionally and/or alternatively be performed in which two or more gestures are trained at the same time. As such, a single gesture detection module may be configured to return which gesture has been performed from a number of different possible gestures and/or return a confidence value for each possible gesture for which the module is trained.
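As a non-limiting sketch of what such a multiclass gesture detection module's interface might look like, the following Python code wraps a boosted classifier that returns a confidence value for every gesture on which it was trained. The use of scikit-learn's AdaBoostClassifier, and the class and gesture names, are illustrative assumptions rather than requirements of this disclosure.

import numpy as np
from sklearn.ensemble import AdaBoostClassifier  # one possible boosted learner

class MulticlassGestureDetector:
    """Wraps a single boosted classifier trained on several gestures at once."""
    def __init__(self, gesture_names):
        self.gesture_names = list(gesture_names)
        self.model = AdaBoostClassifier(n_estimators=200)

    def train(self, feature_vectors, labels):
        # labels are indices into gesture_names; one row of features per frame
        self.model.fit(np.asarray(feature_vectors), np.asarray(labels))

    def detect(self, feature_vector):
        # Returns a confidence value for every gesture the module was trained on.
        probs = self.model.predict_proba(
            np.asarray(feature_vector).reshape(1, -1))[0]
        return {self.gesture_names[c]: p
                for c, p in zip(self.model.classes_, probs)}

# Example usage (hypothetical gesture names):
# detector = MulticlassGestureDetector(["throw", "kick", "no_gesture"])
# detector.train(training_features, training_labels)
# confidences = detector.detect(live_feature_vector)  # e.g., {"throw": 0.85, ...}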
This machine learning approach provides a holistic gesture detection system based on training data (e.g., observation information 48) that requires no code changes from detecting one gesture to another. In other words, a gesture detection module can be created for any desired gesture simply by training that gesture detection module with appropriate training observation information. This approach eliminates the need to manually fine-tune a large number of input parameters, thresholds, and weights, and it can also reduce latency, because the machine learning module can learn the point at which a human's movements are intended to be a particular gesture, rather than only the point at which the gesture finishes.
A set of features may be defined on which the machine learning module will learn to recognize complex patterns. As indicated in
As a non-limiting example, the velocity of a hand joint is a virtual skeleton feature that may be a strong indicator as to whether a human subject is intending to complete a throw gesture. As another example, the relative position of the hand joint compared to the head joint may be another strong indicator. As still another example, the relative position of the elbow joint compared to the shoulder joint may be another strong indicator. A gesture detection module may be trained to consider virtually any number of these features. By analyzing many different instances of positive gestures and negative gestures, the gesture detection module can use machine learning to determine which features serve as the strongest indicators.
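For concreteness, the following Python sketch computes the three example features above from two successive skeleton frames. The joint names, the dictionary-based frame representation, and the function name are assumptions made for illustration.

import numpy as np

def throw_gesture_features(prev_joints, curr_joints, dt):
    """Compute a few example features from two successive skeleton frames.

    prev_joints / curr_joints: dicts mapping joint name -> np.array([x, y, z]).
    dt: elapsed time between the frames, in seconds.
    """
    # Linear velocity of the hand joint (a strong indicator for a throw gesture).
    hand_velocity = (curr_joints["right_hand"] - prev_joints["right_hand"]) / dt

    # Position of the hand relative to the head.
    hand_rel_head = curr_joints["right_hand"] - curr_joints["head"]

    # Position of the elbow relative to the shoulder.
    elbow_rel_shoulder = curr_joints["right_elbow"] - curr_joints["right_shoulder"]

    # Concatenate into a single feature vector for the gesture detection module.
    return np.concatenate([hand_velocity, hand_rel_head, elbow_rel_shoulder])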
The following are provided as non-limiting examples of possible features. It is to be understood that other features are within the scope of this disclosure.
The vertical body axis angle is an example feature derived from the virtual skeleton. A vertical axis can be defined between a spine joint and a center shoulder joint of the virtual skeleton. An angle can be calculated between this vertical axis and any/all other joints in the virtual skeleton. Any of these angles may serve as features.
The horizontal body axis angle is another example feature derived from the virtual skeleton. A horizontal axis can be defined between a center shoulder joint and either the left or right shoulder joint of the virtual skeleton. An angle can be calculated between this horizontal axis and any/all other joints in the virtual skeleton. Any of these angles may serve as features.
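A minimal sketch of the body-axis angle features described in the two preceding paragraphs follows. Because an angle between an axis and a joint position can be defined in more than one way, the sketch assumes the angle is measured between the axis and the vector from the center shoulder joint to the joint of interest; the joint names are likewise illustrative assumptions.

import numpy as np

def angle_between(v1, v2):
    """Angle, in radians, between two 3D vectors."""
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def body_axis_angles(joints, other_joint):
    """joints: dict mapping joint name -> np.array([x, y, z])."""
    # Vertical axis: from the spine joint to the center shoulder joint.
    vertical_axis = joints["center_shoulder"] - joints["spine"]
    # Horizontal axis: from the center shoulder joint to the right shoulder joint.
    horizontal_axis = joints["right_shoulder"] - joints["center_shoulder"]
    # Direction from the center shoulder joint to the joint of interest.
    to_joint = joints[other_joint] - joints["center_shoulder"]
    return (angle_between(vertical_axis, to_joint),
            angle_between(horizontal_axis, to_joint))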
A comparison of an attribute of a first joint of the virtual skeleton and an attribute of a second joint of the virtual skeleton may be used as another example feature derived from the virtual skeleton. For example, a simple subtraction of two joint positions can reveal if one joint is in front of another, above the other, or to the left or right of the other. Such differences can be calculated between any/all other joints in the virtual skeleton. Furthermore, aspects other than joint position can be compared between different joints. Any of these differences or other comparisons may serve as features.
Angular and linear joint speed and velocity are other example features derived from the virtual skeleton. The linear speed and/or velocity of any/all joints may be calculated by dividing the difference in a particular joint's position in two different frames by the elapsed time between those frames. Similarly, the angular speed or velocity may be calculated by comparing the joint angle in two different frames (e.g., angle between shoulder, elbow, and hand in successive frames). Any of these speeds and/or velocities may serve as features.
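The following sketch computes linear velocity and angular speed as described above, given joint positions from two frames and the elapsed time between them. It assumes fully tracked joints and a known frame interval.

import numpy as np

def linear_velocity(prev_pos, curr_pos, dt):
    """Linear velocity of a joint: change in position divided by elapsed time."""
    return (curr_pos - prev_pos) / dt

def joint_angle(a, b, c):
    """Angle at joint b formed by joints a-b-c (e.g., shoulder-elbow-hand)."""
    v1, v2 = a - b, c - b
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def angular_speed(prev_angle, curr_angle, dt):
    """Angular speed: change in joint angle divided by elapsed time."""
    return (curr_angle - prev_angle) / dt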
Angular and linear joint acceleration are other example features derived from the virtual skeleton. The linear acceleration of any/all joints may be calculated by dividing the difference in a particular joint's velocity in two different frames by the elapsed time between those frames. Similarly, the angular acceleration may be calculated by comparing the angular velocity in two different frames. Any of these accelerations may serve as features.
Joint force is another example feature derived from the virtual skeleton. The joint force of any/all joints may be calculated by multiplying a joint acceleration by an estimated mass of a body part corresponding to that joint. Any of these forces may serve as features.
Joint power is another example feature derived from the virtual skeleton. The joint power of any/all joints may be calculated by multiplying a joint force throughout two or more frames by the distance the joint moves during those frames. Any of these powers may serve as features.
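A minimal sketch of the force and power features described in the two preceding paragraphs follows. The per-body-part mass estimates are illustrative placeholders, and the power-style feature follows the description above (force multiplied by the distance the joint moves) rather than an instantaneous physical power.

import numpy as np

# Rough per-body-part mass estimates in kilograms; these values are illustrative
# assumptions, not figures from this disclosure.
ESTIMATED_MASS = {"right_hand": 0.5, "right_forearm": 1.5, "head": 5.0}

def joint_force(body_part, acceleration):
    """Force feature: joint acceleration scaled by an estimated body-part mass."""
    return ESTIMATED_MASS[body_part] * np.asarray(acceleration)

def joint_power(force, displacement):
    """Power-style feature per the description above: a joint force over a span
    of frames multiplied by the distance the joint moves during those frames."""
    return float(np.dot(force, displacement))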
Joint attribute (e.g., angle) over key frames is yet another example feature derived from the virtual skeleton. An attribute in space-time can be calculated for the same joint by selecting a plurality (e.g., three) of key frames from a buffer of virtual skeleton data accumulated over time. Key frames may be independent of time and need only differ from one another by at least a predetermined error metric. Any of these attributes over key frames may serve as features.
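One possible way to select such key frames from a buffer of skeletal data is sketched below; the error metric (mean per-joint displacement) and the threshold are illustrative assumptions.

import numpy as np

def select_key_frames(frame_buffer, error_threshold, num_key_frames=3):
    """frame_buffer: list of per-frame joint matrices (np arrays) over time."""
    key_frames = [frame_buffer[0]]
    for frame in frame_buffer[1:]:
        # Mean per-joint displacement relative to the most recent key frame.
        error = np.mean(np.linalg.norm(frame - key_frames[-1], axis=-1))
        if error > error_threshold:
            key_frames.append(frame)
        if len(key_frames) == num_key_frames:
            break
    return key_frames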
Bone length and bone length differences over time are yet other example features derived from the virtual skeleton. A bone length can be calculated as the length between two different joints of the virtual skeleton. For each bone between two joints in the virtual skeleton, the change in bone length over time can be calculated. Any of these bone lengths and/or bone length differences may serve as features.
Zero pixel density is an example feature derived from the observation information used to derive the virtual skeleton (i.e., depth map). The zero pixel density refers to the number of pixels that are invalid in the depth map. The zero pixel density for the entire depth map and/or any particular region of the depth map may serve as a feature.
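A minimal sketch of this feature follows, assuming invalid depth pixels are encoded as zero in the depth map; the density is expressed here as a fraction of the pixels in the selected region.

import numpy as np

def zero_pixel_density(depth_map, region=None):
    """Fraction of invalid (zero-valued) pixels in the depth map or a region.

    region: optional (row_start, row_stop, col_start, col_stop) slice bounds.
    """
    if region is not None:
        r0, r1, c0, c1 = region
        depth_map = depth_map[r0:r1, c0:c1]
    return float(np.count_nonzero(depth_map == 0)) / depth_map.size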
Length of feature circumference is another example feature derived from the depth map. The length of feature circumference refers to the circumference of an area needed to fit a body part imaged in the depth map. For example, a hand joint may be modeled by the virtual skeleton, and the depth map may be analyzed at the position corresponding to that hand joint. By inferring that the forward-most pixels at that location image the human subject's hand, a circle can be constructed in which all such pixels fit. The size of this circle may indicate whether a hand is open or closed, for example. In addition to the length of feature circumference, the area, the mean of the area, and/or the variance of the area from the depth map may be used. The length of feature circumference, area, mean of area, and/or variance of area may serve as features.
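One possible realization of this feature is sketched below. It assumes the hand is imaged by the forward-most (smallest-depth) valid pixels within a window around the projected hand joint, and it reports the radius of a centroid-centered enclosing circle, from which the circumference and area follow directly. The window size and depth band are illustrative assumptions.

import numpy as np

def hand_circle_radius(depth_map, hand_px, window=40, depth_band=100):
    """Radius of a circle enclosing the forward-most pixels near the hand joint.

    depth_map: 2D array of depth values (0 = invalid).
    hand_px: (row, col) of the hand joint projected into the depth image.
    window: half-size of the square search window, in pixels.
    depth_band: depth range (same units as depth_map) treated as hand pixels.
    """
    r, c = hand_px
    patch = depth_map[max(r - window, 0):r + window,
                      max(c - window, 0):c + window]
    valid = patch[patch > 0]
    if valid.size == 0:
        return 0.0
    near = valid.min()
    rows, cols = np.nonzero((patch > 0) & (patch < near + depth_band))
    center = (rows.mean(), cols.mean())
    # Radius of the smallest centroid-centered circle containing all hand pixels;
    # a larger radius suggests an open hand, a smaller one a closed hand.
    return float(np.max(np.hypot(rows - center[0], cols - center[1])))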
A voxel representation of an aspect of a depth image is another example feature derived from the depth map. For example, a voxel representation of the hand including clipping the wrist voxels and floodfill-climbing hand voxels to exclude other body parts may be used. First and second moments of the voxel representation (e.g., eccentricity and moment of inertia) may be used, as may a histogram of distances from a centroid to the voxels, mean and variance of the histogram, difference between buckets, and the absolute value of a bucket. A projection of voxels onto a two dimensional grid which has a binary feature (occupied/not occupied) per cell may also be used.
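As a non-limiting sketch of a few of these voxel-derived quantities, the following Python code computes the histogram of distances from the centroid to the occupied hand voxels, along with the mean and variance of those distances and the differences between adjacent histogram buckets. The bucket count is an illustrative assumption.

import numpy as np

def voxel_hand_features(occupied_voxels, num_bins=8):
    """occupied_voxels: (N, 3) array of voxel coordinates belonging to the hand."""
    v = np.asarray(occupied_voxels, dtype=float)
    centroid = v.mean(axis=0)
    dists = np.linalg.norm(v - centroid, axis=1)
    # Histogram of distances from the centroid to the occupied voxels.
    hist, _ = np.histogram(dists, bins=num_bins, density=True)
    return {"histogram": hist,
            "mean_distance": float(dists.mean()),
            "distance_variance": float(dists.var()),
            "bucket_differences": np.diff(hist)}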
A contour of a body part image may be used as a feature. For example, a contour of a hand image in camera space can be built, and the following may serve as features for determining whether the hand is open or closed: the number of peaks in the contour, the amount of deviation from the mean and/or median, the extents of the changes between peaks and valleys of the contour, the smoothness of the contour, and whether or not the contour is symmetric.
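A minimal sketch of a few of these contour features follows, assuming the contour is represented as radii from the hand center sampled at regular angular intervals; that representation is an assumption made for illustration.

import numpy as np

def contour_peak_features(contour_radii):
    """Features of a hand contour expressed as radii sampled around the hand.

    contour_radii: 1D array of distances from the hand center to the contour,
    sampled at regular angular intervals.
    """
    r = np.asarray(contour_radii, dtype=float)
    # A peak is a sample larger than both of its (circularly adjacent) neighbors.
    peaks = (r > np.roll(r, 1)) & (r > np.roll(r, -1))
    num_peaks = int(np.count_nonzero(peaks))
    mean_dev = float(np.mean(np.abs(r - r.mean())))
    median_dev = float(np.median(np.abs(r - np.median(r))))
    roughness = float(np.mean(np.abs(np.diff(r))))  # smaller = smoother contour
    return {"num_peaks": num_peaks, "mean_deviation": mean_dev,
            "median_deviation": median_dev, "roughness": roughness}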
Aspects of an edge detection histogram derived from the depth map and/or from a color image may also serve as features.
These and other aspects of the virtual skeleton, depth map, color image, audio, application context, or other aspects of observation information may be analyzed by a trained gesture detection module to determine if a human subject intends to complete a particular gesture.
Via the process of machine learning, the gesture detection module 44 is trained to receive new sets of observation information 49 in real-time as a human subject performs gestures (e.g., to control an application, as shown in
As shown in
As introduced above, a gesture detection module may utilize virtually any machine learning algorithm without departing from the scope of this disclosure. The AdaBoost algorithm is a non-limiting example of one such algorithm. The following is a pseudo-code representation of the AdaBoost algorithm:
Input: training examples (x_1, y_1), . . . , (x_m, y_m), with labels y_i ∈ {−1, +1}
Initialize example weights D_1(i) = 1/m
For t = 1, 2, . . . , T:
    Train a weak classifier h_t using distribution D_t
    Compute the weighted error ε_t of h_t under D_t
    Set α_t = ½ ln((1 − ε_t)/ε_t)
    Update D_{t+1}(i) = D_t(i) exp(−α_t y_i h_t(x_i)) / Z_t, where Z_t normalizes D_{t+1} to a distribution
Output strong classifier H:
    H(x) = sign(Σ_{t=1}^{T} α_t h_t(x))
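For concreteness, the boosting loop above can be realized in a few lines of Python. The use of depth-one decision trees from scikit-learn as weak learners is an illustrative choice, not a requirement of this disclosure.

import numpy as np
from sklearn.tree import DecisionTreeClassifier  # depth-1 trees as weak learners

def adaboost_train(X, y, num_rounds=50):
    """Train a boosted classifier; y must contain labels in {-1, +1}."""
    n = len(y)
    weights = np.full(n, 1.0 / n)               # D_1(i) = 1/m
    learners, alphas = [], []
    for _ in range(num_rounds):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=weights)
        pred = stump.predict(X)
        err = np.clip(np.sum(weights[pred != y]), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)   # alpha_t
        weights *= np.exp(-alpha * y * pred)    # up-weight misclassified examples
        weights /= weights.sum()                # normalize (Z_t)
        learners.append(stump)
        alphas.append(alpha)
    return learners, np.array(alphas)

def adaboost_predict(learners, alphas, X):
    """Strong classifier H(x) = sign(sum_t alpha_t * h_t(x))."""
    votes = sum(a * h.predict(X) for a, h in zip(alphas, learners))
    return np.sign(votes)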
As another example, an RBoost algorithm may be used. In some cases, using RBoost with AdaBoost may result in a more compact representation that improves real-time performance and reduces storage requirements. In particular, the following regularized loss minimization problem may be solved:
L(α⃗) = Σ_{i=1}^{N} exp(−y_i Σ_{j=1}^{|H|} α_j h_j(x_i)), subject to Σ_{j=1}^{|H|} |α_j| ≤ R,

where N is the number of training examples, H is the set of weak hypotheses, and R bounds the L1 norm of the hypothesis weights α_j.
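The following sketch evaluates this regularized objective and its constraint for a candidate weight vector. How the constrained minimization is actually carried out (e.g., by a coordinate-descent or projected-gradient solver) is left open here, since the disclosure does not specify a particular solver.

import numpy as np

def regularized_exp_loss(alpha, weak_outputs, y, R):
    """Evaluate the regularized exponential loss and its L1 constraint.

    alpha: weight vector over the weak hypotheses (length |H|).
    weak_outputs: matrix with weak_outputs[i, j] = h_j(x_i), values in {-1, +1}.
    y: labels in {-1, +1}.
    R: bound on the L1 norm of alpha.
    """
    margins = y * (weak_outputs @ alpha)      # y_i * sum_j alpha_j h_j(x_i)
    loss = float(np.sum(np.exp(-margins)))
    feasible = bool(np.sum(np.abs(alpha)) <= R)
    return loss, feasible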
The above described methodologies have primarily focused on offline machine learning techniques. It is to be understood that online machine learning may be used. As an example, cloud computing could be used to improve existing trained gestures with specific training data of a new subject.
In some embodiments, the above described methods and processes may be tied to a computing system including one or more computers. In particular, the methods and processes described herein may be implemented as a computer application, computer service, computer API, computer library, and/or other computer program product.
Computing system 56 includes a logic subsystem 58, a data-holding subsystem 60, and a sensor subsystem 62. Computing system 56 may optionally include a display subsystem 64, communication subsystem 66, and/or other components not shown in
Logic subsystem 58 may include one or more physical devices configured to execute one or more instructions. For example, the logic subsystem may be configured to execute one or more instructions that are part of one or more applications, services, programs, routines, libraries, objects, components, data structures, or other logical constructs. Such instructions may be implemented to perform a task, implement a data type, transform the state of one or more devices, or otherwise arrive at a desired result.
The logic subsystem may include one or more processors that are configured to execute software instructions. Additionally or alternatively, the logic subsystem may include one or more hardware or firmware logic machines configured to execute hardware or firmware instructions. Processors of the logic subsystem may be single core or multicore, and the programs executed thereon may be configured for parallel or distributed processing. The logic subsystem may optionally include individual components that are distributed throughout two or more devices, which may be remotely located and/or configured for coordinated processing. One or more aspects of the logic subsystem may be virtualized and executed by remotely accessible networked computing devices configured in a cloud computing configuration.
Data-holding subsystem 60 may include one or more physical, non-transitory, devices configured to hold data and/or instructions executable by the logic subsystem to implement the herein described methods and processes. When such methods and processes are implemented, the state of data-holding subsystem 60 may be transformed (e.g., to hold different data).
Data-holding subsystem 60 may include removable media and/or built-in devices. Data-holding subsystem 60 may include optical memory devices (e.g., CD, DVD, HD-DVD, Blu-Ray Disc, etc.), semiconductor memory devices (e.g., RAM, EPROM, EEPROM, etc.) and/or magnetic memory devices (e.g., hard disk drive, floppy disk drive, tape drive, MRAM, etc.), among others. Data-holding subsystem 60 may include devices with one or more of the following characteristics: volatile, nonvolatile, dynamic, static, read/write, read-only, random access, sequential access, location addressable, file addressable, and content addressable. In some embodiments, logic subsystem 58 and data-holding subsystem 60 may be integrated into one or more common devices, such as an application specific integrated circuit or a system on a chip.
It is to be appreciated that data-holding subsystem 60 includes one or more physical, non-transitory devices. In contrast, in some embodiments aspects of the instructions described herein may be propagated in a transitory fashion by a pure signal (e.g., an electromagnetic signal, an optical signal, etc.) that is not held by a physical device for at least a finite duration. Furthermore, data and/or other forms of information pertaining to the present disclosure may be propagated by a pure signal.
The terms “module,” “program,” and “engine” may be used to describe an aspect of computing system 56 that is implemented to perform one or more particular functions. In some cases, such a module, program, or engine may be instantiated via logic subsystem 58 executing instructions held by data-holding subsystem 60. It is to be understood that different modules, programs, and/or engines may be instantiated from the same application, service, code block, object, library, routine, API, function, etc. Likewise, the same module, program, and/or engine may be instantiated by different applications, services, code blocks, objects, routines, APIs, functions, etc. The terms “module,” “program,” and “engine” are meant to encompass individual or groups of executable files, data files, libraries, drivers, scripts, database records, etc.
Sensor subsystem 62 may include one or more sensors configured to sense one or more human subjects, as described above. For example, the sensor subsystem 62 may comprise one or more image sensors, motion sensors such as accelerometers, touch pads, touch screens, and/or any other suitable sensors. Therefore, sensor subsystem 62 may be configured to provide observation information to logic subsystem 58, for example. As described above, observation information such as image data, motion sensor data, and/or any other suitable sensor data may be used to perform such tasks as determining a particular gesture performed by the one or more human subjects.
In some embodiments, sensor subsystem 62 may include a depth camera 70 (e.g., depth camera 22A of
In other embodiments, depth camera 70 may be a structured light depth camera configured to project a structured infrared illumination comprising numerous, discrete features (e.g., lines or dots). Depth camera 70 may be configured to image the structured illumination reflected from a scene onto which the structured illumination is projected. Based on the spacings between adjacent features in the various regions of the imaged scene, a depth image of the scene may be constructed.
In other embodiments, depth camera 70 may be a time-of-flight camera configured to project a pulsed infrared illumination onto the scene. The depth camera may include two cameras configured to detect the pulsed illumination reflected from the scene. Both cameras may include an electronic shutter synchronized to the pulsed illumination, but the integration times for the cameras may differ, such that a pixel-resolved time-of-flight of the pulsed illumination, from the source to the scene and then to the cameras, is discernable from the relative amounts of light received in corresponding pixels of the two cameras.
In some embodiments, sensor subsystem 62 may include a visible light camera 72 (e.g., visible light camera 22B of
When included, display subsystem 64 may be used to present a visual representation of data held by data-holding subsystem 60. As the herein described methods and processes change the data held by the data-holding subsystem, and thus transform the state of the data-holding subsystem, the state of display subsystem 64 may likewise be transformed to visually represent changes in the underlying data. Display subsystem 64 may include one or more display devices utilizing virtually any type of technology. Such display devices may be combined with logic subsystem 58 and/or data-holding subsystem 60 in a shared enclosure, or such display devices may be peripheral display devices.
When included, communication subsystem 66 may be configured to communicatively couple computing system 56 with one or more other computing devices. Communication subsystem 66 may include wired and/or wireless communication devices compatible with one or more different communication protocols. As non-limiting examples, the communication subsystem may be configured for communication via a wireless telephone network, a wireless local area network, a wired local area network, a wireless wide area network, a wired wide area network, etc. In some embodiments, the communication subsystem may allow computing system 56 to send and/or receive messages to and/or from other devices via a network such as the Internet.
It is to be understood that the configurations and/or approaches described herein are exemplary in nature, and that these specific embodiments or examples are not to be considered in a limiting sense, because numerous variations are possible. The specific routines or methods described herein may represent one or more of any number of processing strategies. As such, various acts illustrated may be performed in the sequence illustrated, in other sequences, in parallel, or in some cases omitted. Likewise, the order of the above-described processes may be changed.
The subject matter of the present disclosure includes all novel and nonobvious combinations and subcombinations of the various processes, systems and configurations, and other features, functions, acts, and/or properties disclosed herein, as well as any and all equivalents thereof.