Benefit is claimed under 35 U.S.C. 119(a)-(d) to Foreign application Serial No. 3420/CHE/2010 entitled “HAND POSE RECOGNITION”, by Hewlett Packard Development Company L.P., filed on 15 Nov. 2010 in INDIA, which is herein incorporated in its entirety by reference for all purposes.
Captured images of hand poses or gestures are sometimes used as input to a computerized device. Existing methods and systems using such captured images either utilize large libraries of samples, which are difficult to create or update with new hand poses, or suffer from inaccuracy.
Hand pose recognition system 20 comprises display 24, sensor 26, processor 28 and memory 30. Display 24 comprises a device in communication with processor 28 that is configured to present to a person visual information including one or more selections or prompts, wherein a person may use hand poses, such as the example hand pose 22 shown, to make a selection or respond to the prompt. Display 24 may have a variety of sizes, shapes and configurations. In one embodiment, display 24 may comprise a computer screen or monitor. In some embodiments, display 24 may be omitted.
Sensor 26 comprises one or more sensors or cameras configured to capture image frames of the presented or input hand pose 22 from a distance. In one embodiment, sensor 26 comprises a camera which provides both an RGB image of the hand pose and an inverse depth map (closest is brightest) for every image frame. The depth image has a resolution of 320×240 pixels at eight bits per pixel and is generated in real time at 30 frames per second. The RGB or color image is converted to a grayscale image. In other embodiments, the color image may be utilized. In yet other embodiments, other sensors or other cameras may be utilized.
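A minimal sketch of this per-frame preprocessing, assuming numpy arrays for the sensor output (the function name and the luminance weights are illustrative choices, not taken from the disclosure):

```python
import numpy as np

def preprocess_frame(rgb, inv_depth):
    """Illustrative preprocessing of one sensor frame.

    rgb:       (240, 320, 3) uint8 color image
    inv_depth: (240, 320) uint8 inverse depth map (closest = brightest)
    """
    # Standard luminance conversion of the color image to grayscale.
    gray = (0.299 * rgb[..., 0] + 0.587 * rgb[..., 1]
            + 0.114 * rgb[..., 2]).astype(np.uint8)
    # Scale the 8-bit inverse depth map to [0, 1]; larger values are nearer.
    nearness = inv_depth.astype(np.float32) / 255.0
    return gray, nearness
```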
Processor 28, following instructions contained in memory 30, generates control signals directing the operation of display 24 and sensor 26. Processor 28 receives hand pose images from sensor 26 and recognizes the input hand pose following instructions contained in memory 30. In response to the identified input hand pose, processor 28, following response instructions contained in memory 30, generates one or more signals to carry out one or more operations or functions. For example, in response to an identified hand pose, processor 28 may retrieve data from memory 30 or from an external database or other source via a network and generate control signals causing the retrieved information or data to be displayed by display 24. In response to an identified hand pose, processor 28 may generate control signals causing an electronic file to be altered and causing the altered file to be displayed on display 24 or to be stored in memory 30. In response to an identified hand pose, processor 28 may perform one or more computations which result in an altered display 24 or an altered file in memory 30. In yet other embodiments, in response to an identified hand pose, processor 28 may generate control signals for controlling devices 31 external to system 20, examples of which include, but are not limited to, printers or machines.
Memory 30 comprises one or more persistent storage devices storing instructions for processor 28 as well as storing other information or data for use by processor 28. Memory 30 may be directly connected to processor 28 in a wired or wireless fashion or may be in communication with processor 28 via a network. Memory 30 stores a hand pose library 32. Memory 30 further stores computer or processor readable language, code or instructions (schematically illustrated as hand pose input 34, hand pose classifier 36 and hand pose adder 38) to be carried out by processor 28 for performing hand pose recognition as well as adding custom hand poses. Hand pose library 32 comprises a database of various hand poses that are recognizable by system 20. Hand pose library 32 stores data, such as feature vectors, associated with each recognizable hand pose or hand pose class, wherein the hand pose classifier 36 compares an input hand pose 22 with possible hand poses (also referred to as hand pose classes) in the hand pose library 32 to identify and recognize the input hand pose 22.
Hand pose input 34 comprises computer readable language or code stored in memory 30 for controlling the detection and segmentation of an input hand pose 22 by processor 28.
In hand segmentation step 56, processor 28 segments or separates that portion of the captured image frame corresponding to the hand from other elements or background. In the example described, in which sensor 26 provides both depth and color/grayscale values, both depth and grayscale values are used to separate or distinguish the hand from surrounding objects. In other embodiments, such segmentation may be performed in other manners. As indicated by step 58, once hand segmentation has been completed by processor 28, processor 28 proceeds with classification or recognition of the input hand pose 22 under the direction of hand pose classifier 36.
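One plausible way such depth-plus-grayscale segmentation might be realized is sketched below; the assumption that the hand is the object nearest the sensor, and both threshold parameters, are illustrative rather than taken from the disclosure:

```python
import numpy as np

def segment_hand(gray, nearness, depth_band=0.15, min_intensity=30):
    """Illustrative hand segmentation using depth and grayscale cues.

    Assumes the hand is the nearest object, i.e. the brightest
    region of the normalized inverse depth map `nearness`.
    """
    # Keep pixels within a narrow depth band behind the nearest point...
    nearest = nearness.max()
    mask = nearness >= (nearest - depth_band)
    # ...and discard pixels too dark in the grayscale image to be hand.
    mask &= gray >= min_intensity
    return mask  # boolean mask of the segmented hand region
```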
Hand pose classifier 36 comprises computer readable language or code stored in memory 30 and configured to direct processor 28 in the recognition or identification of an input hand pose 22.
As indicated by arrows 67, processor 28 carries out multiple operations and a plurality of iterations based upon current hand pose estimates and a computed residue between the hand pose estimate and the input hand pose. During the first iteration, the initial hand pose estimate serves as the current hand pose estimate. Thereafter, the hand pose estimate computed from the previous hand pose estimate and the residue of the previous hand pose estimate serves as the current hand pose estimate. As indicated by step 66, once processor 28 determines the initial hand pose estimate, processor 28, under the direction of hand pose classifier 36, identifies a residue for the current hand pose estimate. For purposes of this disclosure, the term “residue” refers to quantified differences between the sensed values of the actual input hand pose and stored corresponding values in the hand pose library for the possible hand pose or hand pose class constituting the current hand pose estimate.
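The disclosure does not fix a particular residue formula; one natural realization, in the spirit of the matching-pursuit literature listed among the cited publications, is the least-squares reconstruction error of the input feature vector over the library feature vectors of the current estimate. A hedged sketch (names are illustrative):

```python
import numpy as np

def residue(y, phi_s):
    """Least-squares residue of the input feature vector y against the
    library feature vectors (columns of phi_s) of the current estimate.

    One plausible choice; the disclosure only requires a quantified
    difference between the sensed and stored values.
    """
    # Project y onto the span of the candidate vectors and subtract.
    coeffs, *_ = np.linalg.lstsq(phi_s, y, rcond=None)
    return y - phi_s @ coeffs
```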
As indicated by step 68, processor 28 uses the residue for the current hand pose estimate to determine a new hand pose estimate. As indicated by step 70, processor 28 determines whether there is a convergence between the new hand pose estimate and the previous hand pose estimate. In some embodiments, processor 28 may determine whether a convergence occurs amongst more than two hand pose estimates. If an insufficient convergence is found, processor 28 goes on to compute another iteration, wherein processor 28 identifies the residue of the most recent hand pose estimate with respect to the input hand pose and determines yet another new hand pose estimate using the most recently determined residue. The iterations are continued until processor 28 finds a sufficient convergence amongst the hand pose estimates or until the number of iterations reaches a predefined limit or cap as indicated by step 72.
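Pulling steps 64 through 72 together, a minimal loop sketch follows; `initial_estimate` and `next_estimate` are assumed helper routines standing in for steps 64 and 68, and the simple equality test is a simplified stand-in for the convergence criterion of step 70:

```python
def classify_pose(y, library, max_iters=10):
    """Sketch of the iterate-until-convergence loop of steps 64-72.

    `library` maps each pose class to a matrix whose columns are that
    class's feature vectors. Helpers and the test are illustrative.
    """
    estimate = initial_estimate(y, library)          # step 64 (assumed)
    for _ in range(max_iters):                       # cap per step 72
        r = residue(y, library[estimate])            # step 66
        new_estimate = next_estimate(y, r, library)  # step 68 (assumed)
        if new_estimate == estimate:                 # convergence, step 70
            break
        estimate = new_estimate
    return estimate
```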
As indicated by step 74, once processor 28 determines that the hand pose estimates or hand pose class estimates from the iterations sufficiently converge towards a single possible hand pose found in library 32, or once the number of iterations has reached a predefined limit, processor 28 outputs the image frame hand pose estimate. The output image frame hand pose estimate scores may then be compared to additional thresholds to determine (1) whether the input hand pose 22 is satisfactorily proximate to the output image frame hand pose estimate such that the image frame hand pose estimate may serve as a reliable prediction or estimate for the input hand pose depicted in the captured image frame or (2) whether the input hand pose 22 captured in the image frame is an outlier, not sufficiently corresponding to one of the predefined possible hand poses contained in hand pose library 32.
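A sketch of such an outlier test, reusing the `residue` sketch above; the relative-energy score and the 0.5 threshold are assumptions, since the disclosure only states that the scores are compared against additional thresholds:

```python
import numpy as np

def accept_or_reject(y, estimate, library, max_residue=0.5):
    """Illustrative outlier test on the output estimate of step 74."""
    r = residue(y, library[estimate])
    # Relative residual energy: small means the estimate explains y well.
    score = np.linalg.norm(r) / np.linalg.norm(y)
    return estimate if score <= max_residue else None  # None = outlier
```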
Note that the proposed algorithm can be easily adapted to cover cases where there are multiple (say p) observations of the hand pose (for example, using multiple cameras), or multiple samples of a given test pose (for example, multiple image frames).
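The disclosure does not specify how the p observations are combined; one simple, purely illustrative approach is to classify each observation independently (using a routine such as the `classify_pose` sketch above, an assumed helper) and take a majority vote:

```python
from collections import Counter

def classify_multi(ys, library):
    """Majority vote over p observations of the same hand pose."""
    votes = [classify_pose(y, library) for y in ys]
    return Counter(votes).most_common(1)[0][0]
```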
According to one embodiment, multiple samples of each possible hand pose are captured by sensor 26 and transformed to feature vectors 200 of library 132. The number of samples for each pose or pose class in library 132 is reduced using techniques such as clustering, singular value decomposition (SVD) and the like. For example, in one embodiment, processor 28 may use K-means clustering, wherein each cluster can be treated as a variation of a pose. Processor 28 may then select a representative sample from each cluster for each pose to reduce the size of library 132. In another embodiment, library 132 may be formed by using feature vectors from other sources.
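A minimal sketch of this K-means reduction for one pose class, using scikit-learn; the cluster count k and the nearest-to-center selection rule are illustrative parameters:

```python
import numpy as np
from sklearn.cluster import KMeans

def reduce_pose_samples(samples, k=5):
    """Reduce many feature vectors of one pose class to k representatives.

    `samples` is an (n, d) array of feature vectors for a single pose.
    """
    km = KMeans(n_clusters=k, n_init=10).fit(samples)
    reps = []
    for c in range(k):
        members = samples[km.labels_ == c]
        # Pick the member closest to each cluster center as representative.
        dists = np.linalg.norm(members - km.cluster_centers_[c], axis=1)
        reps.append(members[np.argmin(dists)])
    return np.stack(reps)
```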
In a fashion similar to the creation of library 132, system 20 permits a user to add customized or personalized hand poses and associate commands with such hand poses.
As shown by step 162 of the figure, processor 28 first transforms the segmented input hand pose 22 into a feature vector for comparison against library 132.
As indicated by step 164, processor 28 identifies candidate feature vectors from library 132.
As indicated by step 166, processor 28 computes a score for each hand pose having feature vectors amongst the candidate feature vectors 224. Whichever hand pose (P1, P2) has the highest score is identified as the initial hand pose estimate. In the example illustrated, the initial hand pose estimate is pose P2, including feature vectors 222-5, 222-6 and 222-7.
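The scoring rule itself is not fully spelled out in this excerpt; one plausible sketch scores each pose by the correlation of the input feature vector with that pose's surviving candidate feature vectors (names and the max rule are assumptions):

```python
import numpy as np

def score_poses(y, candidates):
    """Score each pose class by its best-matching candidate vector.

    `candidates` maps a pose label to an (n, d) array of that pose's
    feature vectors that survived candidate selection (step 164).
    """
    scores = {pose: np.max(np.abs(vecs @ y))
              for pose, vecs in candidates.items()}
    return max(scores, key=scores.get)  # initial hand pose estimate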
As indicated by steps 170 and 172, after processor 28 computes the residue vector for the initial hand pose estimate, processor 28 applies the stopping criteria: whether successive hand pose estimates have sufficiently converged (step 170) and whether the iteration limit T has been reached (step 172).
As indicated by step 164, after computing the residue vector, processor 28 once again identifies candidate library feature vectors for the next hand pose estimate. However, unlike the candidate library feature vectors utilized to determine the initial hand pose estimate prior to the completion of any iterations, the candidate library feature vectors after the first iteration are determined in part using the residue vector from the previous hand pose estimate. In particular, processor 28 computes a dot product of the residue vector from the previous hand pose estimate with each feature vector of library 132. An example of this step is shown in the figures.
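A hedged sketch of this residue-driven candidate selection, in the style of the orthogonal-matching-pursuit literature listed among the cited publications; the number of candidates kept is an assumed parameter:

```python
import numpy as np

def select_candidates(r, library_vectors, n_keep=8):
    """Pick the library feature vectors most correlated with the residue.

    `library_vectors` is an (n, d) array of all feature vectors in
    library 132; `r` is the residue vector of the previous estimate.
    """
    corr = np.abs(library_vectors @ r)       # dot product per step 164
    return np.argsort(corr)[::-1][:n_keep]   # indices of top candidates
```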
Referring back to method 160, processor 28 determines per step 170 whether the hand pose estimates of successive iterations have sufficiently converged according to the stopping criteria.
If processor 28 determines there has not been sufficient convergence of the iteration hand pose estimates according to the stopping criteria, processor 28 then determines whether the iteration limit has been met per step 172. If the limit has not been met, processor 28 continues on with yet another iteration as described above. If the iteration limit T has been reached, processor 28 proceeds to settle upon and output the image frame hand pose estimate.
As indicated by step 174, once the hand pose estimates sufficiently converge or the iteration limit has been reached, processor 28 outputs the image frame hand pose estimate.
Although the present disclosure has been described with reference to example embodiments, workers skilled in the art will recognize that changes may be made in form and detail without departing from the spirit and scope of the claimed subject matter. For example, although different example embodiments may have been described as including one or more features providing one or more benefits, it is contemplated that the described features may be interchanged with one another or alternatively be combined with one another in the described example embodiments or in other alternative embodiments. Because the technology of the present disclosure is relatively complex, not all changes in the technology are foreseeable. The present disclosure described with reference to the example embodiments and set forth in the following claims is manifestly intended to be as broad as possible. For example, unless specifically otherwise noted, the claims reciting a single particular element also encompass a plurality of such particular elements.
Foreign Application Priority Data

Number | Date | Country | Kind
---|---|---|---
3420/CHE/2010 | Nov 2010 | IN | national
U.S. Patent Documents

Number | Name | Date | Kind
---|---|---|---
4760397 | Piccolruaz | Jul 1988 | A
5581276 | Cipolla et al. | Dec 1996 | A
6215890 | Matsuo et al. | Apr 2001 | B1
6788809 | Grzeszczuk et al. | Sep 2004 | B1
6819782 | Imagawa et al. | Nov 2004 | B1
7289645 | Yamamoto et al. | Oct 2007 | B2
7565295 | Hernandez-Rebollar | Jul 2009 | B1
20020013675 | Knoll et al. | Jan 2002 | A1
20060098845 | Sotriropoulos et al. | May 2006 | A1
20060251298 | Bronstein et al. | Nov 2006 | A1
20070091292 | Cho et al. | Apr 2007 | A1
20080152218 | Okada | Jun 2008 | A1
20080219502 | Shamaie | Sep 2008 | A1
20080244465 | Kongqiao et al. | Oct 2008 | A1
20090064054 | Ishigaki et al. | Mar 2009 | A1
20090087028 | Lacey et al. | Apr 2009 | A1
20090110292 | Fujimura et al. | Apr 2009 | A1
20090271004 | Zecchin et al. | Oct 2009 | A1
20090278915 | Kramer et al. | Nov 2009 | A1
20090324008 | Kongqiao et al. | Dec 2009 | A1
20100090946 | Underkoffler et al. | Apr 2010 | A1
20100114517 | Boeve et al. | May 2010 | A1
20100117963 | Westerman et al. | May 2010 | A1
20100329509 | Fahn et al. | Dec 2010 | A1
Foreign Patent Documents

Number | Date | Country
---|---|---
2004004202 | May 2004 | KR
WO2009134482 | Nov 2009 | WO
Other Publications

Smith et al., Efficient Hand Gesture Rendering and Decoding Using a Simple Gesture Library, State University of New York at Binghamton, entire document.
Dias et al., OGRE—Open Gestures Recognition Engine, ADETTI/ISCTE, Lisboa, Portugal, Oct. 17-20, 2004, pp. 33-40.
Zhe Yang et al., Accelerometer Based Hand Action Recognition, entire document.
Vassilis Athitsos, Database Indexing Methods for 3D Hand Pose Estimation, entire document.
Pose Libraries, entire document, http://www.blender.org/development/releaselogs/blender-246/pose-libraries/.
P. Suryanarayan et al., Dynamic Hand Pose Recognition using Depth Data, 2010 International Conference on Pattern Recognition, entire document.
W. T. Freeman et al., Television control by hand gestures, Mitsubishi Electric Research Laboratories, Tech. Rep. TR94-24, 1994, entire document.
R. Block, Toshiba Qosmio G55 features SpursEngine, visual gesture controls, http://www.engadget.com/2008/06/14/toshiba-qosmio-g55-features-spursengine-visual-gesturecontrols/, entire document.
S. Mitra et al., Gesture recognition: A survey, IEEE Trans. Systems, Man, and Cybernetics, Part C: Applications and Reviews, vol. 37, no. 3, pp. 311-324, May 2007.
G. Yahav et al., 3D imaging camera for gaming application, Proc. of the Intl. Conf. on Consumer Electronics, ICCE 2007, Las Vegas, NV, Jan. 2007, pp. 1-2.
Project Natal, http://www.xbox.com/en-US/live/projectnatal/, http://www.xbox.com/EN-in/kinect, entire document.
J. A. Tropp et al., Signal recovery from random measurements via Orthogonal Matching Pursuit, IEEE Trans. Info. Theory, vol. 53, no. 12, pp. 4655-4666, Dec. 2007.
D. Needell et al., CoSaMP: Iterative signal recovery from incomplete and inaccurate samples, ACM Report Jan. 2008, Mar. 2008, revised Jul. 2008.
W. Dai et al., Subspace Pursuit for Compressive Sensing Signal Reconstruction, IEEE Trans. Info. Theory, vol. 55, no. 5, pp. 2230-2249, May 2009.
Chan Wah Ng et al., Gesture Recognition via Pose Classification, Department of Electrical Engineering, National University of Singapore, p. 3703, 2000.
Publication Number

Number | Date | Country
---|---|---
20120119984 A1 | May 2012 | US