The present invention relates generally to interface systems, and specifically to compound gesture recognition.
As the range of activities accomplished with a computer increases, new and innovative ways to provide an interface with a computer are often developed to complement the changes in computer functionality and packaging. For example, touch sensitive screens can allow a user to provide inputs to a computer without a mouse and/or a keyboard, such that desk area is not needed to operate the computer. Examples of touch sensitive screens include pressure sensitive membranes, beam break techniques with circumferential light sources and sensors, and acoustic ranging techniques. However, these types of computer interfaces can only provide information to the computer regarding the touch event itself, and thus can be limited in application. In addition, such types of interfaces can be limited in the number of touch events that can be handled over a given amount of time, and can be prone to interpreting unintended contacts, such as from a shirt cuff or palm, as touch events. Furthermore, touch sensitive screens can be prohibitively expensive and impractical for very large display sizes, such as those used for presentations.
One embodiment of the invention includes a method for executing and interpreting gesture inputs in a gesture recognition interface system. The method includes detecting and translating a first sub-gesture into a first device input that defines a given reference associated with a portion of displayed visual content. The method also includes detecting and translating a second sub-gesture into a second device input that defines an execution command for the portion of the displayed visual content to which the given reference refers.
Another embodiment of the invention includes a method for executing and interpreting gesture inputs in a gesture recognition interface system. The method includes obtaining a plurality of sequential images of a gesture input environment and detecting a first sub-gesture based on a three-dimensional location of at least one feature of a first input object relative to displayed visual content in each of the plurality of sequential images of the gesture input environment. The method also includes translating the first sub-gesture into a first device input that defines a given reference associated with a portion of the displayed visual content. The method also includes detecting a second sub-gesture based on changes in the three-dimensional location of at least one feature of at least one of the first input object and a second input object in each of the plurality of sequential images of the gesture input environment. The method further includes translating the second sub-gesture into a second device input that defines an execution command for the portion of the displayed visual content to which the given reference refers.
Another embodiment of the invention includes a gesture recognition system. The system comprises means for displaying visual content and means for obtaining a plurality of sequential images of a gesture input environment that is associated with the visual content. The system also comprises means for determining compound gesture inputs associated with at least one input object based on three-dimensional locations of at least one feature of the at least one input object in each of the plurality of sequential images of the gesture input environment. The system further comprises means for translating the compound gesture inputs into a first device input and a second device input. The first device input can be configured to reference a portion of the visual content and the second device input can be configured to execute a command associated with the portion of the visual content to which the first device input refers in at least one of the buffered plurality of sequential images.
The present invention relates generally to interface systems, and specifically to compound gesture recognition. A user employs an input object to provide simulated inputs to a computer or other electronic device. It is to be understood that the simulated inputs can be provided by compound gestures using the input object. For example, the user could provide gestures that include pre-defined motion using the input object in a gesture recognition environment, such as defined by a foreground of a display screen that displays visual content. The input object could be, for example, one or both of the user's hands; a wand, stylus, or pointing stick; or a variety of other devices with which the user can gesture. The simulated inputs could be, for example, simulated mouse inputs, such as to establish a reference to the displayed visual content and to execute a command on portions of the visual content to which the reference refers. Thus, a compound gesture can be a gesture in which multiple sub-gestures are employed to provide multiple related device inputs. For example, a first sub-gesture can be a reference gesture to refer to a portion of the visual content, and a second sub-gesture can be an execution gesture that can be performed concurrently with or immediately after the first sub-gesture, such as to execute a command on the portion of the visual content to which the first sub-gesture refers.
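Purely as an illustrative sketch of this reference/execution pairing, the following Python fragment shows one possible way an interpreter could hold the portion of visual content referred to by a reference sub-gesture and apply a later execution sub-gesture to it. The class names, gesture labels, and returned strings are hypothetical and are not part of the described embodiments.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SubGesture:
    kind: str                       # e.g. "POINT", "THUMB_EXTEND", "SNAP"
    target: Optional[str] = None    # visual-content item the gesture refers to

class CompoundGestureInterpreter:
    """Pairs a reference sub-gesture with a subsequent execution sub-gesture."""

    def __init__(self) -> None:
        self._reference: Optional[SubGesture] = None

    def on_sub_gesture(self, gesture: SubGesture) -> Optional[str]:
        # A pointing sub-gesture establishes (or updates) the reference.
        if gesture.kind == "POINT":
            self._reference = gesture
            return None
        # Any other recognized sub-gesture is treated as an execution gesture
        # applied to the portion of visual content the reference refers to.
        if self._reference is not None:
            return f"execute {gesture.kind} on {self._reference.target}"
        return None
```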
Any of a variety of gesture recognition interface systems can be implemented to recognize the compound gestures. As an example, one or more infrared (IR) light sources can illuminate a gesture recognition environment that is defined by the area of physical space in a foreground of a vertical or horizontal display surface. A set of stereo cameras can each generate a plurality of images of the input object. The plurality of images can be, for example, based on a reflected light contrast of the IR light reflected back from the input object relative to substantially non-reflected light or more highly reflected light from a retroreflective background surface. The plurality of images of the input object from each camera could be, for example, a plurality of matched sets of images of the input object, such that each image in the matched set of images corresponds to the input object from a different perspective at substantially the same time. A given matched set of images can be employed to determine a location of the input object and the plurality of matched sets of images can be employed to determine physical motion of the input object.
A controller can be configured to receive the plurality of images to determine three-dimensional location information associated with the input object. For example, the controller could apply an algorithm to determine features of the input object, such as endpoints, length, and pitch of elongated portions of the input object in three-dimensional space. The controller could then translate the simulated inputs into device inputs based on the three-dimensional location information. For example, the controller could interpret gesture inputs based on motion associated with the input object and translate the gesture inputs into inputs to a computer or other device. The controller could also compare the motion associated with the one or more endpoints of the input object with a plurality of pre-defined gestures stored in a memory, such that a match with a given pre-defined gesture could correspond with a particular device input.
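As a rough illustration of comparing endpoint motion with pre-defined gestures stored in a memory, the sketch below matches an observed motion track against a small library of templates by nearest-neighbor distance. The library contents, tolerance value, and function name are assumptions for illustration only and do not represent the controller's actual matching algorithm.

```python
import numpy as np
from typing import Optional

# Hypothetical library of pre-defined gestures: each template is a short,
# translation-normalized sequence of 3-D endpoint positions keyed by the
# device input it would correspond to.
GESTURE_LIBRARY = {
    "LEFT_CLICK": np.array([[0.0, 0.0, 0.0], [0.0, 0.0, -0.05], [0.0, 0.0, 0.0]]),
    "SCROLL":     np.array([[0.0, 0.0, 0.0], [0.0, 0.05, 0.0], [0.0, 0.10, 0.0]]),
}

def match_gesture(track: np.ndarray, tolerance: float = 0.03) -> Optional[str]:
    """Return the device input whose stored motion template is closest to the
    observed endpoint track, or None if nothing matches within tolerance."""
    track = track - track[0]                 # make the track translation-invariant
    best_name, best_err = None, np.inf
    for name, template in GESTURE_LIBRARY.items():
        if len(template) != len(track):
            continue
        err = np.mean(np.linalg.norm(track - template, axis=1))
        if err < best_err:
            best_name, best_err = name, err
    return best_name if best_err < tolerance else None
```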
An input object 24 can provide simulated inputs over the vertical display surface 20. In the example of
In the example of
In the example of
The first camera 12 and the second camera 14 can each provide their respective separate images of the input object 24 to a controller 26. The controller 26 could reside, for example, within a computer (not shown) for which the gesture recognition interface system 10 is designed to provide a gesture recognition interface. It is to be understood, however, that the controller need not be hosted in a standalone computer, but could instead be included in an embedded processor. The controller 26 can process the respective images associated with the input object 24 to generate three-dimensional location data associated with the input object 24.
For example, the first camera 12 and the second camera 14 could each be mounted at a pre-determined angle relative to the floor 28 beneath the vertical display surface 20. For a given matched pair of images of the input object 24, if the pre-determined angles of the cameras 12 and 14 are equal, then each point of the input object 24 in two-dimensional space in a given image from the camera 12 is equidistant from a corresponding point of the input object 24 in the respective matched image from the camera 14. As such, the controller 26 could determine the three-dimensional physical location of the input object 24 based on a relative parallax separation of the matched set of images of the input object 24 at a given time. In addition, using a computer algorithm, the controller 26 could also determine the three-dimensional physical location of features associated with portions of the input object 24, such as fingers and fingertips. As an example, the controller 26 can be configured to determine and interpret the gestures that are provided in the gesture recognition environment in any of a variety of ways, such as those described in either of U.S. patent applications entitled “Gesture Recognition Interface System”, Ser. No. 11/485,788, filed Jul. 13, 2006, and “Gesture Recognition Interface System with Vertical Display”, Ser. No. 12/133,836, filed Jun. 5, 2008, each assigned to the same assignee as the present application and incorporated herein by reference in its entirety.
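For readers unfamiliar with parallax-based ranging, the sketch below shows the textbook stereo-triangulation relationship for rectified cameras that underlies such a computation. The explicit focal-length and baseline parameters are illustrative assumptions; they do not correspond to the pre-determined mounting angles or the specific computation used by the controller 26.

```python
def depth_from_parallax(x_left_px: float, x_right_px: float,
                        focal_length_px: float, baseline_m: float) -> float:
    """Textbook stereo triangulation for rectified cameras: the smaller the
    parallax (disparity) between a matched pair of image points, the farther
    the corresponding feature is from the camera pair."""
    disparity = x_left_px - x_right_px       # parallax separation in pixels
    if disparity <= 0.0:
        raise ValueError("matched points must yield a positive disparity")
    return focal_length_px * baseline_m / disparity   # depth in meters
```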
The gesture recognition interface system 10 can also include a projector 30. The projector 30 can provide visual content with which the user can interact and provide inputs. In the example of
As an example, the controller 26 can determine compound gestures that are performed by a user using the input object 24 and can translate the compound gestures into simulated mouse inputs. For example, the controller 26 could interpret pointing at the vertical display surface 20 by the input object 24, such as with an extended index finger, to establish a reference 32 on the visual content that is displayed on the vertical display surface 20. In the example of
The establishment of the reference 32 can be a first of multiple sub-gestures of a compound gesture. Specifically, an additional sub-gesture can be implemented using the input object 24, or an additional input object such as the user's other hand, to perform an execution gesture that can be translated as an execution command to interact with a portion of the visual content to which the reference 32 refers, such as based on a visual overlapping. The portion of the visual content with which the reference 32 overlaps could be an active portion, such as one that provides interaction in response to execution commands. Therefore, the controller 26 can interpret the additional sub-gesture of the compound gesture as a left mouse-click, a right mouse-click, a double mouse-click, or a click-and-hold. Accordingly, a user of the gesture recognition interface system 10 could navigate through a number of computer menus, graphical user interface (GUI) icons, and/or execute programs associated with a computer merely by moving his or her fingertip through the air in the gesture recognition environment 22 and initiating one or more complementary gestures without touching a mouse or the vertical display surface 20.
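A minimal sketch of this translation step is shown below, assuming a simple lookup from recognized execution sub-gestures to simulated mouse inputs applied to the currently referenced item. The gesture names and the particular mapping are hypothetical examples, not a gesture set defined by the system.

```python
# Hypothetical mapping from recognized execution sub-gestures to simulated
# mouse inputs; the gesture labels are illustrative only.
EXECUTION_COMMANDS = {
    "THUMB_FLICK":    "LEFT_CLICK",
    "TWO_FINGER_TAP": "RIGHT_CLICK",
    "DOUBLE_FLICK":   "DOUBLE_CLICK",
    "THUMB_EXTEND":   "CLICK_AND_HOLD",
    "THUMB_RETRACT":  "RELEASE",
}

def translate_execution_gesture(gesture: str, referenced_item: str) -> str:
    """Translate an execution sub-gesture into a simulated mouse input that is
    applied to the visual-content item currently referred to by the reference."""
    command = EXECUTION_COMMANDS.get(gesture)
    if command is None:
        return f"ignore {gesture}"            # not a recognized execution gesture
    return f"{command} on {referenced_item}"
```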
The first portion 52 of the diagram 50 demonstrates a user's hand 58 performing a first sub-gesture, such that the user's hand 58 is implemented as an input object in the associated gesture recognition interface system. The first sub-gesture is demonstrated in the example of
The second portion 54 of the diagram 50 demonstrates that, upon the reference 64 referring to OBJECT 3, the user performs a second sub-gesture of the compound gesture with the hand 58 by extending the thumb of the hand 58. The second sub-gesture that is performed by extending the thumb of the hand 58 can thus be an execution gesture. Therefore, in the second portion 54 of the diagram 50, the extension of the thumb could be translated by the associated controller as a “click-and-hold” command, such as to simulate a click-and-hold of a left mouse button. Accordingly, in the second portion 54 of the diagram 50, OBJECT 3 is selected for interaction by the user merely by the extension of the thumb.
The third portion 56 of the diagram 50 demonstrates the interaction of OBJECT 3 based on the user implementing the first gesture of the compound gesture. Specifically, as demonstrated in the example of
The example of
The compound gesture that is demonstrated in the example of
In the example of
In addition to translating the gestures into device inputs based on the sequential images stored in the image buffer 36, the controller 26 can also access the sequential images that are stored in the image buffer 36 to identify a portion of the visual content to which a reference gesture was referring prior to the performance of a subsequent execution gesture. As an example, the controller 26 can monitor an amount of time that a reference gesture refers to a given portion of the visual content and/or an amount of time between the termination of a reference gesture and the performance of an execution gesture. Accordingly, the controller 26 can associate the execution gesture with the reference gesture based on one or more timing thresholds, such that the controller 26 can access previous images in the sequential images stored in the image buffer 36 to perform the corresponding execution command on the appropriate portion of the visual content.
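The sketch below illustrates one possible form of this buffered association, assuming the referenced item for each frame has already been extracted from the images. The class name, buffer length, and one-second timing threshold are placeholders for illustration, not values prescribed by the system.

```python
import time
from collections import deque
from typing import Optional

class ReferenceHistory:
    """Sketch of the buffered lookup: remembers which portion of the visual
    content the reference gesture referred to in recent frames, so that a
    later execution gesture can be applied to it."""

    def __init__(self, maxlen: int = 128, timeout_s: float = 1.0) -> None:
        self._history = deque(maxlen=maxlen)   # (timestamp, referenced_item)
        self._timeout_s = timeout_s            # assumed association threshold

    def record_frame(self, referenced_item: Optional[str]) -> None:
        if referenced_item is not None:
            self._history.append((time.monotonic(), referenced_item))

    def target_for_execution(self) -> Optional[str]:
        """Return the most recently referenced item, provided the reference was
        made within the timing threshold; otherwise the association lapses."""
        if not self._history:
            return None
        timestamp, item = self._history[-1]
        if time.monotonic() - timestamp > self._timeout_s:
            return None
        return item
```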
The first portion 102 of the diagram 100 demonstrates a user's hand 108 performing a first sub-gesture, such that the user's hand 108 is implemented as an input object in the associated gesture recognition interface system. The first sub-gesture is demonstrated in the example of
The second portion 104 of the diagram 100 demonstrates that, upon the reference 114 referring to OBJECT 3, the user performs a second sub-gesture of the compound gesture with the hand 108 by snapping the fingers of the hand 108. The second sub-gesture that is performed by snapping the fingers of the hand 108 can thus be an execution gesture. Therefore, in the second portion 104 of the diagram 100, the snapping of the fingers could be translated by the associated controller as an execution command, such as to simulate a double click of a left mouse button.
As demonstrated in the example of
The third portion 106 of the diagram 100 demonstrates the effect of the execution command that is performed on OBJECT 3. Specifically, as described above, OBJECT 3 is configured as a desktop folder. Therefore, the effect of a simulated double left mouse-click is to open the desktop folder, demonstrated in the example of
The example of
Referring back to the example of
The diagram 150 includes a set of compound gestures that each involve the use of a user's hand 152 to perform the compound gestures. Each of the compound gestures demonstrated in the example of
A first compound gesture 158 is demonstrated in the diagram 150 as similar to the compound gesture demonstrated in the example of
A second compound gesture 160 is demonstrated in the diagram 150 as beginning with the reference gesture 154. However, the execution gesture 156 is demonstrated as the user maintaining the reference gesture 154 with the hand 152, except that the hand 152 is thrust forward and backward rapidly. Thus, the controller 26 can interpret the execution gesture 156 based on the rapid change forward and backward of the hand 152. In addition, a user can maintain the reference gesture 154 while performing the execution gesture 156, similar to the compound gesture described above in the example of
A third compound gesture 162 is demonstrated in the diagram 150 as beginning with the reference gesture 154. However, the execution gesture 156 is demonstrated as the user maintaining the extension of the index finger while rotating the index finger in a circle. As an example, the third compound gesture 162 can be configured to scroll through a document or list that is displayed on the vertical display surface 20, depending on the direction of rotation of the index finger. For example, the controller 26 could be configured to access the image buffer 36 to determine the document or list to which the reference gesture 154 referred prior to the execution gesture 156. As another example, the third compound gesture 162 could be combined with another gesture, such that the list or document could be selected with a different compound gesture prior to the execution gesture 156 of the third compound gesture 162.
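As a sketch of how the direction of such a rotation could be estimated, the following function accumulates the signed angular change of the fingertip about the centroid of its two-dimensional track. The angle threshold and the mapping of rotation sense to scroll direction are assumptions made only for illustration.

```python
import math
from typing import List, Tuple

def scroll_direction(fingertip_xy: List[Tuple[float, float]]) -> str:
    """Estimate the rotation direction of a circling fingertip from its 2-D
    track (e.g. as projected onto the display plane).  A positive accumulated
    angle is taken here as counter-clockwise and mapped to 'scroll up'."""
    if len(fingertip_xy) < 3:
        return "none"
    cx = sum(p[0] for p in fingertip_xy) / len(fingertip_xy)
    cy = sum(p[1] for p in fingertip_xy) / len(fingertip_xy)
    total = 0.0
    for (x0, y0), (x1, y1) in zip(fingertip_xy, fingertip_xy[1:]):
        a0 = math.atan2(y0 - cy, x0 - cx)
        a1 = math.atan2(y1 - cy, x1 - cx)
        delta = math.atan2(math.sin(a1 - a0), math.cos(a1 - a0))  # wrap to (-pi, pi]
        total += delta
    if abs(total) < 0.5:                      # less than ~30 degrees: no rotation
        return "none"
    return "scroll up" if total > 0 else "scroll down"
```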
A fourth compound gesture 164 is demonstrated in the diagram 150 as beginning with the reference gesture 154. However, the execution gesture 156 is demonstrated as the user forming a claw-grip with the thumb and all fingers. As an example, the fourth compound gesture 164 could be implemented to select a portion of the visual content for movement or for manipulation. It is to be understood that the fourth compound gesture 164 could include a subset of the fingers formed as a claw-grip, or each different number or set of fingers could correspond to a different execution command. In addition, the claw-grip need not be implemented with the fingers and/or thumb touching, but could just include the fingers and/or thumb being slightly extended and bent.
A fifth compound gesture 166 is demonstrated in the diagram 150 as beginning with the reference gesture 154. However, the execution gesture 156 is demonstrated as the user forming an open palm. A sixth compound gesture 168 is demonstrated in the diagram 150 as beginning with the reference gesture 154, with the execution gesture 156 being demonstrated as the user forming a closed fist. As an example, the fifth compound gesture 166 and/or the sixth compound gesture 168 could be implemented to select a portion of the visual content for movement or for manipulation. In addition, for example, either of the fifth compound gesture 166 and the sixth compound gesture 168 could include motion of the thumb to incorporate a different execution gesture.
The diagram 150 in the example of
The diagram 200 includes a first compound gesture 202, a second compound gesture 203, a third compound gesture 204, and a fourth compound gesture 205 that all involve the use of a user's hand 206 to perform the compound gestures. Each of the compound gestures demonstrated in the example of
The first compound gesture 202 is demonstrated in the diagram 200 as similar to the fourth compound gesture 164 demonstrated in the example of
The second compound gesture 203 is demonstrated in the diagram 200 as similar to the fifth compound gesture 166 demonstrated in the example of
The third compound gesture 204 is demonstrated in the diagram 200 as similar to the sixth compound gesture 168 demonstrated in the example of
The fourth compound gesture 205 is demonstrated in the diagram 200 as similar to the sixth compound gesture 168 demonstrated in the example of
The diagram 200 in the example of
The two-handed compound gesture 250 demonstrated in the example of
The example of
The diagram 300 includes a set of compound gestures that each involve the use of a user's left hand 302 and right hand 304 to perform the compound gestures. Each of the compound gestures demonstrated in the example of
A first compound gesture 310 is demonstrated in the diagram 300 as similar to the compound gesture 168 demonstrated in the example of
A second compound gesture 314 is demonstrated in the diagram 300 as similar to the compound gestures 164 and 202 demonstrated in the examples of
A third compound gesture 318 is demonstrated in the diagram 300 as similar to the compound gestures 166 and 203 demonstrated in the examples of
It is to be understood that the diagram 300 is not intended to be limiting as to the two-handed compound gestures that are capable of being performed in the gesture recognition interface system 10. As an example, the two-handed compound gestures are not limited to implementation of the extended fingers and thumb of the ready position 308 of the right hand; a different arrangement of the fingers and thumb could instead be implemented. As another example, it is to be understood that the two-handed compound gestures in the diagram 300 can be combined with any of a variety of other gestures, such as the single-handed compound gestures in the examples of
The gesture recognition interface system 400 includes a first camera 402 and a second camera 404. Coupled to each of the first camera 402 and the second camera 404, respectively, is a first IR light source 406 and a second IR light source 408. The first camera 402 and the second camera 404 may each include an IR filter, such that the respective camera may pass IR light and substantially filter other light spectrums. The first IR light source 406 and the second IR light source 408 each illuminate a background surface 410 which can be retroreflective. As such, IR light from the first IR light source 406 can be reflected substantially directly back to the first camera 402 and IR light from the second IR light source 408 can be reflected substantially directly back to the second camera 404. Accordingly, an object that is placed above the background surface 410 may reflect a significantly lesser amount of IR light back to each of the first camera 402 and the second camera 404, respectively. Therefore, such an object can appear to each of the first camera 402 and the second camera 404 as a silhouette image, such that it can appear as a substantially darker object in the foreground of a highly illuminated background surface 410. It is to be understood that the background surface 410 may not be completely retroreflective, but may include a Lambertian factor to facilitate viewing by users at various angles relative to the background surface 410.
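The silhouette contrast described above could be exploited roughly as in the following sketch, which thresholds an IR frame and takes the topmost silhouette pixel as a crude end-point candidate. The threshold value and the fingertip heuristic are illustrative assumptions only and do not describe the controller's actual feature detection.

```python
import numpy as np
from typing import Optional, Tuple

def silhouette_and_fingertip(ir_frame: np.ndarray, threshold: int = 128
                             ) -> Tuple[np.ndarray, Optional[Tuple[int, int]]]:
    """Sketch of the brightness-contrast step: against the brightly
    retroreflected background, the input object appears as a dark silhouette.
    Returns the boolean silhouette mask and the row/column of the topmost
    silhouette pixel as a crude fingertip (end-point) candidate."""
    silhouette = ir_frame < threshold          # dark pixels = input object
    rows, cols = np.nonzero(silhouette)
    if rows.size == 0:
        return silhouette, None                # no object in the foreground
    topmost = rows.argmin()
    return silhouette, (int(rows[topmost]), int(cols[topmost]))
```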
An input object 412 can provide simulated inputs over the background surface 410. In the example of
In the example of
The first camera 402 and the second camera 404 can each provide their respective separate silhouette images of the input object 412 to a controller 414. The controller 414 could reside, for example, within a computer (not shown) for which the gesture recognition interface system 400 is designed to provide a gesture recognition interface. It is to be understood, however, that the controller need not be hosted in a standalone computer, but could instead be included in an embedded processor. The controller 414 can process the respective silhouette images associated with the input object 412 to generate three-dimensional location data associated with the input object 412.
For example, each of the first camera 402 and the second camera 404 could be mounted at a pre-determined angle relative to the background surface 410. For a given matched pair of images of the input object 412, if the predetermined angle of each of the cameras 402 and 404 is equal, then each point of the input object 412 in two-dimensional space in a given image from the camera 402 is equidistant from a corresponding point of the input object 412 in the respective matched image from the camera 404. As such, the controller 414 could determine the three-dimensional physical location of the input object 412 based on a relative parallax separation of the matched pair of images of the input object 412 at a given time. In addition, using a computer algorithm, the controller 414 could also determine the three-dimensional physical location of at least one end-point, such as a fingertip, associated with the input object 412.
The gesture recognition interface system 400 can also include a projector 416 configured to project image data. The projector 416 can provide an output interface, such as, for example, computer monitor data, with which the user can interact and provide inputs using the input object 412. In the example of
It is to be understood that the gesture recognition interface system 400 is not intended to be limited to the example of
The gesture recognition interface system 450 includes a three-dimensional display system 458, demonstrated in the example of
An input object 464, demonstrated as a user's hand in the example of
As an example, a user of the gesture recognition interface system 450 could perform a reference gesture with the input object 464 to refer to one of the functional components 462, demonstrated in the example of
The gesture recognition interface system 450 is demonstrated as yet another example of the use of compound gestures in providing device inputs to a computer. It is to be understood that the gesture recognition interface system 450 is not intended to be limited to the example of
In view of the foregoing structural and functional features described above, a methodology in accordance with various aspects of the present invention will be better appreciated with reference to
At 506, a first gesture input is determined based on a three-dimensional location of at least one feature of a first input object relative to displayed visual content in each of the plurality of sequential images of the gesture input environment. The first gesture input can be a portion of a compound gesture, such that it is a reference gesture. The gesture can be determined based on an IR brightness contrast as perceived by a controller in each of the sequential images. The three-dimensional location can be based on parallax separation of the features in each of the concurrent images in the sequence. At 508, the first gesture is translated into a first device input to the computer, the first device input being configured to refer to a portion of the visual content. The reference to the portion of the visual content can be based on establishing a reference, such as a mouse pointer, on the visual content in response to the first gesture. Thus the first gesture input could be a pointed index finger to simulate a mouse cursor.
At 510, a second gesture input is determined based on changes in the three-dimensional location of at least one feature of at least one of the first input object and a second input object in each of the plurality of sequential images of the gesture input environment, the second gesture being different than the first gesture. The second gesture input can be a portion of a compound gesture, such that it is an execution gesture. The second gesture input could be performed with the same hand as the first gesture input, the other hand, or with both hands. At 512, the second gesture is translated into a second device input to the computer, the second device input being configured to execute a command associated with the portion of the visual content to which the first device input refers in at least one of the buffered plurality of sequential images. The executed command can be any of a variety of commands that manipulate the portion of the visual content to which the first gesture input refers, such as left, right, or scrolling mouse commands, and/or such as single-click, double-click, or click-and-hold commands.
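Tying the steps together, the sketch below runs the logic of steps 506 through 512 over a stream of per-frame detections, assuming the imaging and feature-extraction stages have already reduced each frame to a sub-gesture label and an optionally referenced item. The labels, the function name, and the string outputs are hypothetical placeholders for illustration.

```python
from typing import Iterable, List, Optional, Tuple

# Each frame is assumed to have already been reduced to a (gesture, target)
# pair by the imaging and feature-extraction stages described above.
Frame = Tuple[str, Optional[str]]   # (detected sub-gesture, referenced item)

def run_methodology(frames: Iterable[Frame]) -> List[str]:
    """Sketch of steps 506-512: detect a reference gesture, translate it into a
    reference device input, then translate a later execution gesture into a
    command on the referenced portion of the visual content."""
    device_inputs: List[str] = []
    referenced: Optional[str] = None
    for gesture, target in frames:
        if gesture == "POINT" and target is not None:
            referenced = target                                  # steps 506-508
            device_inputs.append(f"reference -> {target}")
        elif gesture != "NONE" and referenced is not None:
            device_inputs.append(f"{gesture} -> {referenced}")   # steps 510-512
    return device_inputs

# Example: a pointing gesture at OBJECT 3 followed by a snap gesture.
print(run_methodology([("POINT", "OBJECT 3"), ("NONE", None), ("SNAP", None)]))
```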
What have been described above are examples of the present invention. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the present invention, but one of ordinary skill in the art will recognize that many further combinations and permutations of the present invention are possible. Accordingly, the present invention is intended to embrace all such alterations, modifications and variations that fall within the spirit and scope of the appended claims.