The present invention relates generally to interface systems, and specifically to a gesture recognition light and video image projector.
As the range of activities accomplished with a computer increases, new and innovative ways to provide an interface with a computer are often developed to complement the changes in computer functionality and packaging. For example, touch sensitive screens can allow a user to provide inputs to a computer without a mouse and/or a keyboard, such that desk area is not needed to operate the computer. Examples of touch sensitive screens include pressure sensitive membranes, beam break techniques with circumferential light sources and sensors, and acoustic ranging techniques. However, these types of computer interfaces can only provide information to the computer regarding the touch event itself, and thus can be limited in application. In addition, such interfaces can be limited in the number of touch events that can be handled over a given amount of time, and can be prone to interpreting unintended contacts, such as from a shirt cuff or palm, as touch events. Furthermore, touch sensitive screens can be prohibitively expensive and impractical for very large display sizes, such as those used for presentations.
Some interfaces can include imaging equipment to capture silhouette images that correspond to inputs. Such interfaces may include one or more light sources that provide the light to generate the silhouette images against a background. The source or sources of the light may need to be precisely located, and such interfaces typically require a manual calibration step that can be lengthy and can involve additional equipment. As a result, such an interface can be expensive, and setup of the interface can be complicated and can require a high degree of precision. Furthermore, any change in the interface system (e.g., an unintended bump) requires an additional manual calibration.
A system and method are provided for a gesture recognition interface system. The system comprises a projector configured to project colorless light and visible images onto a background surface. The projection of the colorless light can be interleaved with the projection of the visible images. The system also comprises at least one camera configured to receive a plurality of images based on a reflected light contrast difference between the background surface and a sensorless input object during projection of the colorless light. The system further comprises a controller configured to determine a given input gesture based on changes in relative locations of the sensorless input object in the plurality of images, and being further configured to initiate a device input associated with the given input gesture.
Another embodiment of the present invention includes a method of providing device inputs. The method comprises projecting non-visible light interleaved with visible images onto a background surface and providing simulated inputs over the background surface via gestures associated with a sensorless input object. The method also comprises generating a plurality of images associated with the sensorless input object based on a reflected light contrast between the sensorless input object and the non-visible light. The method also comprises determining a plurality of three-dimensional physical locations of the sensorless input object relative to the background surface, and determining if changes in the plurality of three-dimensional physical locations of the sensorless input object correspond to a given one of a plurality of predefined gestures. The method further comprises providing at least one device input based on determining that the changes in the plurality of three-dimensional physical locations of the sensorless input object correspond to the given one of the plurality of predefined gestures.
Another embodiment of the present invention includes a gesture recognition interface system. The system comprises means for projecting non-visible light interleaved with visible images onto a background surface, and means for generating a plurality of images of a sensorless input object based on a brightness contrast between the background surface and the sensorless input object. The system also comprises means for generating three-dimensional location information associated with the sensorless input object based on the plurality of images of the sensorless input object and means for translating changes in the three-dimensional location information associated with the sensorless input object to a given input gesture. The system further comprises means for providing device inputs based on matching the given input gesture with one of a plurality of predefined gestures.
The present invention relates generally to interface systems, and specifically to a gesture recognition light and video image projector. A projector is configured to project a colorless light that is interleaved with visible light images onto a background surface. The background surface can thus be implemented as a display, such as a large projected computer screen. The colorless light can be non-visible light, such as infrared (IR), and thus does not interfere with the visible light images. As an example, the projector can be a digital light projection (DLP) projector that includes a color wheel having both visible light lenses and a lens associated with colorless light.
A user employs a sensorless input object to provide simulated inputs to a computer or other electronic device associated with the visible light images. It is to be understood that the simulated inputs are provided by gestures using the sensorless input object. For example, the user could provide gestures that include motion and/or contact with a background surface using the sensorless input object. The sensorless input object could be, for example, the user's hand; a wand, stylus, or pointing stick; or a variety of other devices with which the user can gesture. The simulated inputs could be, for example, simulated mouse inputs.
The colorless light illuminates the sensorless input object and the background surface behind the sensorless input object to generate a plurality of images of the sensorless input object that are captured by at least one camera. For example, the at least one camera can include a pair of stereo cameras. As such, the plurality of images of the sensorless input object could be a plurality of matched pairs of images of the sensorless input object, such that each image of the matched pair corresponds to the sensorless input object from a different perspective at substantially the same time. As another example, the at least one camera can be configured as a single camera that compares the location of the sensorless input object to a predefined pattern that is projected onto the background surface. The plurality of images can be employed to determine a three-dimensional location of the sensorless input object, with changes in the location being employed to determine physical motion of the sensorless input object.
A controller can be operative to receive the plurality of images to determine the three-dimensional location information associated with the sensorless input object. The controller could then translate the simulated inputs into device inputs based on the three-dimensional location information. For example, the controller could interpret gesture inputs based on motion associated with one or more end-points of the sensorless input object and translate the gesture inputs into inputs to a computer or other device. The controller could also compare the motion associated with the sensorless input object with a plurality of predefined gestures stored in a memory, such that a match with a given predefined gesture could correspond to a particular device input.
The projector 12 can project the visible light images to provide an output interface, such as, for example, computer monitor data, with which the user can interact and provide inputs. In the example of
At a time T0, the projector 12 projects the red light 54, the green light 56, and the blue light 58 in a sequence, with mirrors within the projector 12 reflecting the respective projected light onto precise pixel locations on the background surface 18. At the time T1, the projector 12 stops projecting visible light, the blue light 58 in the example of
The projector 12 projects the IR light 52 from the time T1 through a time T2. Therefore, the time T1 through the time T2 defines a gesture recognition period 62. During the gesture recognition period 62, the cameras 14 and 16 each capture an image of the IR light 52 reflected from the background surface 18. As a result, the cameras 14 and 16 can capture images that are based on a light contrast between the sensorless input object 20 and the background surface 18. Specifically, because the cameras 14 and 16 can have an IR filter, they can each be configured to capture a silhouette image of the sensorless input object 20 during the gesture recognition period 62. Therefore, the silhouette images can be determinative of a three-dimensional location of the sensorless input object 20, as described in greater detail below.
At a time T2, another visible image projection period 60 begins, followed by another gesture recognition period 62 beginning at a time T3. Likewise, at a time T4, yet another visible image projection period 60 begins, followed by yet another gesture recognition period 62 beginning at a time T5. Therefore, the projector 12 can continuously alternate between the visible image projection periods 60 and the gesture recognition periods 62. The projection of each of the red light 54, the green light 56, the blue light 58, and the IR light 52 can be for approximately equal amounts of time, and can be very rapid (e.g., approximately 4 milliseconds). Therefore, a user of the gesture recognition interface system 10 may not be able to ascertain any breaks in the visible images that are projected onto the background surface 18 during the gesture recognition periods 62. Accordingly, the projection of the IR light 52 may not interfere with the computer monitor data that is projected onto the background surface 18.
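The alternation described above can be pictured as a simple loop that repeats a visible image projection period 60 followed by a gesture recognition period 62. The sketch below is illustrative only; the slot duration and the project/capture callables are hypothetical placeholders rather than an actual projector or camera interface.

```python
# Minimal sketch of the interleaved projection schedule described above.
# The slot duration and the project_*/capture_* callables are hypothetical
# placeholders, not part of any actual projector or camera API.
import time

SLOT_SECONDS = 0.004  # approximately 4 ms per slot, as an assumption


def run_frame(project_color, project_ir, capture_silhouettes):
    """Project one visible frame (R, G, B), then one IR gesture-recognition slot."""
    # Visible image projection period 60: red, green, blue in sequence.
    for color in ("red", "green", "blue"):
        project_color(color)
        time.sleep(SLOT_SECONDS)

    # Gesture recognition period 62: IR light only; cameras capture during this slot.
    project_ir()
    capture_silhouettes()
    time.sleep(SLOT_SECONDS)


if __name__ == "__main__":
    # Stub callables so the sketch runs standalone.
    run_frame(lambda c: None, lambda: None, lambda: None)
```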
Referring back to the example of
The gesture recognition interface system 10 includes a controller 22. As an example, the controller 22 could reside within a computer (not shown) for which the gesture recognition interface system 10 is designed to provide gesture recognition interaction. It is to be understood, however, that the hosting of the controller 22 is not limited to a standalone computer; the controller 22 could instead be included in an embedded processor. The controller 22 is configured to receive each of the silhouette image pairs that are captured by the cameras 14 and 16 during the gesture recognition period 62 and to determine a three-dimensional location of the sensorless input object 20. The controller 22 can also be configured to correlate three-dimensional motion of the sensorless input object 20 across successive gesture recognition periods 62 with one of a plurality of predefined gestures. The predefined gestures can each correspond to a device input, such as an input to a computer associated with the computer monitor data that is projected onto the background surface 18 as the visible light images. Accordingly, the sensorless input object 20 can be implemented by a user of the gesture recognition interface system 10 to simulate inputs over the background surface 18 that correspond to the device inputs.
As an example, the controller 22 can correlate motion of an end-point of the sensorless input object 20 across the background surface 18, tracked over successive gesture recognition periods 62, with two-dimensional movement of a mouse cursor, which can be projected as part of the monitor data by the projector 12. Furthermore, as another example, by determining the three-dimensional physical location of the sensorless input object 20, the controller 22 could interpret a touch of the background surface 18 by the end-point of the sensorless input object 20 as a left mouse-click. Accordingly, a user of the gesture recognition interface system 10 could navigate through a number of computer menus associated with a computer merely by moving his or her fingertip through the air above the background surface 18 and by touching icons projected onto the background surface 18.
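As an illustration of this kind of translation, the following sketch maps a tracked end-point location to cursor movement and treats a height of approximately zero above the background surface as a touch that simulates a left mouse-click. The event names and the touch test are assumptions made for the example, not a prescribed implementation.

```python
# Hedged sketch: translating a tracked end-point location into simulated mouse
# events. The event strings and the zero-height touch test are assumptions for
# illustration only.
from typing import List, Tuple

TOUCH_HEIGHT = 0.0  # height relative to the background surface treated as a touch


def to_mouse_events(locations: List[Tuple[float, float, float]]) -> List[str]:
    """Map successive (x, y, height) end-point locations to mouse-style events."""
    events = []
    touching = False
    for x, y, height in locations:
        events.append(f"move_cursor({x:.2f}, {y:.2f})")
        if height <= TOUCH_HEIGHT and not touching:
            events.append("left_click_down")
            touching = True
        elif height > TOUCH_HEIGHT and touching:
            events.append("left_click_up")
            touching = False
    return events


print(to_mouse_events([(0.1, 0.2, 0.05), (0.1, 0.2, 0.0), (0.1, 0.2, 0.04)]))
```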
As described herein in the example of
In the example of
The calibration of the gesture recognition interface system 10 can be an operation that is software driven within the controller 22 and/or the automated calibration unit 24. For example, upon the initiation of a calibration procedure, the projector 12 can be configured to project a predefined calibration pattern onto the background surface 18 during the projection of the colorless light (i.e., during a gesture recognition period 62). As an example, the predefined calibration pattern can include a grid, an array of dots, and/or any of a variety of patterns having distinct and clearly ascertainable features. The location of the features of the predefined calibration pattern can be specifically defined by the automated calibration unit 24. Because the predefined calibration pattern is projected in the colorless light, which can be non-visible light (e.g., IR light), the predefined calibration pattern can be invisible with respect to users of the gesture recognition interface system 10. Therefore, the projection of the predefined calibration pattern may not interfere with operation of the gesture recognition interface system 10.
The calibration procedure 80 demonstrates an overhead view of the projector 12 projecting a predefined calibration pattern 82 onto the background surface 18. The projection of the predefined calibration pattern 82 can be performed during a gesture recognition period 62, such that the predefined calibration pattern 82 is projected in the colorless light (e.g., IR light). As a result, the predefined calibration pattern 82 can be invisible to users of the gesture recognition interface system 10 as it is being projected onto the background surface 18.
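As an illustration, a dot-array predefined calibration pattern can be generated as a simple binary image whose feature locations are recorded for later comparison. The resolution, spacing, and dot radius below are illustrative assumptions.

```python
# Hedged sketch of generating an array-of-dots calibration pattern as a binary
# image to be projected in the colorless (e.g., IR) light. Dimensions and
# spacing are assumptions; the feature locations are returned so that they can
# be recorded, as the automated calibration unit 24 is described as doing.
import numpy as np


def make_dot_pattern(width=640, height=480, spacing=80, radius=4):
    """Return (pattern_image, feature_locations) for an array-of-dots pattern."""
    pattern = np.zeros((height, width), dtype=np.uint8)
    yy, xx = np.mgrid[0:height, 0:width]
    features = []
    for cy in range(spacing // 2, height, spacing):
        for cx in range(spacing // 2, width, spacing):
            pattern[(yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2] = 1
            features.append((cx, cy))
    return pattern, features


pattern, features = make_dot_pattern()
print(len(features), int(pattern.sum()))
```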
In the example of
The gesture recognition interface system 10 in the example of
In addition, the example of
The gesture recognition interface system 100 includes the controller 22, the projector 12, the first camera 14, and the second camera 16. As described above, the projector 12 is configured to interleave the projection of visible light and colorless light. The visible light can be light that is projected to display visible images onto the background surface 18, such as computer monitor data. The controller 22 can thus provide image data 102 to the projector 12 that corresponds to the visible images that are projected onto the background surface 18 with which the user(s) can interact. The colorless light can be non-visible light, such as IR light, that is projected for gesture recognition interaction. Accordingly, the cameras 14 and 16 can each be timed to capture an image of the background surface 18 at each instant of the projection of the colorless light for gesture recognition, as described in greater detail below.
The controller 22 includes the automated calibration unit 24. As described above in the example of
In addition to the manual calibration of
Upon the initiation of a calibration procedure, the automated calibration unit 24 provides a predefined calibration pattern 106 to the projector 12. As such, the projector 12 can project the predefined calibration pattern 106 onto the background surface 18 in the colorless light. As an example, the predefined calibration pattern can include a grid, an array of dots, and/or any of a variety of patterns having distinct and clearly ascertainable features. The cameras 14 and 16 can thus capture images of the predefined calibration pattern 106, such that the precise pixel locations of the features of the predefined calibration pattern 106 can be correlated with precise two-dimensional locations on the background surface 18, as described in greater detail below.
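One way to picture this correlation is to fit a mapping from the pixel locations at which a camera observes the pattern features to the known locations supplied by the automated calibration unit 24. The sketch below fits a least-squares affine mapping as a simplified stand-in; an actual system might fit a full projective transform, and detection of the features in the camera images is assumed to have been performed already.

```python
# Minimal calibration sketch: fit an affine mapping from camera pixel
# coordinates of detected calibration-pattern features to the known pattern
# coordinates. The correspondences below are synthetic examples.
import numpy as np


def fit_affine(camera_pts: np.ndarray, pattern_pts: np.ndarray) -> np.ndarray:
    """Least-squares affine transform A (2x3) such that pattern ~= A @ [x, y, 1]."""
    ones = np.ones((camera_pts.shape[0], 1))
    design = np.hstack([camera_pts, ones])          # (N, 3)
    A, *_ = np.linalg.lstsq(design, pattern_pts, rcond=None)
    return A.T                                      # (2, 3)


def camera_to_pattern(A: np.ndarray, pt: np.ndarray) -> np.ndarray:
    """Map one camera pixel coordinate into pattern (surface) coordinates."""
    return A @ np.append(pt, 1.0)


# Synthetic dot-grid correspondences: camera pixels -> known pattern positions.
camera = np.array([[100, 100], [300, 110], [110, 300], [310, 305]], float)
pattern = np.array([[0, 0], [1, 0], [0, 1], [1, 1]], float)
A = fit_affine(camera, pattern)
print(camera_to_pattern(A, np.array([200.0, 205.0])))
```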
During a gesture recognition period 62, as described in the example of
The background model images can be used to decide, at each pixel, whether the silhouette images of the sensorless input object 20 correspond to a binary 1 or 0. In the above described example of the sensorless input object 20 being a silhouette object in the foreground of an illuminated background, at each pixel location, if the value in the silhouette image of the sensorless input object 20 is less than the corresponding background model image value multiplied by a threshold scaling value between 0 and 1, the output value will be a binary 1, thus denoting the presence of the sensorless input object 20. In this manner, the scaling value can be selected to provide an optimal balance between desirably detecting the sensorless input object 20 and being substantially insensitive to residual shadows cast on the screen by an opposing source of illumination for the background surface 18.
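A minimal sketch of this thresholding step follows, assuming the silhouette image and the background model image are available as arrays and using an illustrative scaling value.

```python
# Hedged sketch of the thresholding described above: a pixel is marked 1
# (object present) when the captured image is darker than the background model
# image scaled by a factor between 0 and 1. The scaling value is illustrative.
import numpy as np


def binarize(silhouette: np.ndarray, background_model: np.ndarray,
             scale: float = 0.8) -> np.ndarray:
    """Return a binary mask: 1 where the input object occludes the lit background."""
    return (silhouette < background_model * scale).astype(np.uint8)


# Synthetic example: a bright background model with a darker occluded region.
background_model = np.full((4, 6), 200, dtype=np.float32)
frame = background_model.copy()
frame[1:3, 2:4] = 60          # dark pixels where the object sits
print(binarize(frame, background_model))
```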
The contrast enhanced binarized silhouette images of the sensorless input object 20 are then each input to an object detection algorithm device 112. The object detection algorithm device 112 can be an integrated circuit (IC) or set of ICs within the controller 22, or could be a software routine residing in the controller 22. The object detection algorithm device 112 can include any of a variety of detection algorithms for determining a two-dimensional location of the sensorless input object 20 relative to the background surface 18. As an example, because the location of each of the pixels that are projected onto the background surface can be identified by the controller 22 based on the calibration procedure, as described above, the object detection algorithm device 112 can determine which of the pixels are covered by the silhouette image of the sensorless input object 20. As another example, the object detection algorithm device 112 can include a two-dimensional Laplacian of Gaussian convolution filter that applies a mathematical algorithm to each of the digitized images of the sensorless input object 20 to determine the location of one or more end-points of the sensorless input object 20, such as fingertips, in two-dimensional space.
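The following sketch illustrates the second approach with a Laplacian of Gaussian filter applied to a binarized silhouette, taking the strongest response as a candidate end-point. The sigma value and the selection heuristic are illustrative assumptions rather than a prescribed detector.

```python
# Hedged sketch of a two-dimensional Laplacian-of-Gaussian step: filter the
# binarized silhouette and take the strongest negative response as a candidate
# end-point (a narrow, rounded protrusion such as a fingertip responds strongly
# at a sigma near its radius). Sigma and the argmin heuristic are assumptions.
import numpy as np
from scipy.ndimage import gaussian_laplace


def find_end_point(mask: np.ndarray, sigma: float = 2.0):
    """Return (row, col) of the strongest blob-like response in a binary mask."""
    response = gaussian_laplace(mask.astype(float), sigma=sigma)
    return np.unravel_index(np.argmin(response), response.shape)


# Synthetic silhouette: a narrow "finger" extending from a wider "hand" region.
mask = np.zeros((40, 40), dtype=np.uint8)
mask[20:40, 10:30] = 1        # hand/arm region
mask[5:20, 18:22] = 1         # finger
print(find_end_point(mask))
```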
The composite data and/or images that are provided by the object detection algorithm devices 112 are input to a calibration data and location resolver 114. The calibration data and location resolver 114 determines a three-dimensional location of the sensorless input object 20 at a given time. As an example, the calibration data and location resolver 114 can be configured to compare the relative two-dimensional locations of the images of the sensorless input object 20 provided by each of the cameras 14 and 16 and to interpolate a three-dimensional location of the sensorless input object 20 based on a parallax separation of the respective images. The gesture recognition interface system 100 can be calibrated to identify the amount of physical separation of the two-dimensional images that corresponds to a height of the sensorless input object 20 relative to the background surface 18. Accordingly, a given value of separation could correspond to a height of zero, thus denoting a touch of an endpoint of the sensorless input object 20 (e.g., the user's fingertip) to the background surface 18.
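The height determination can be pictured as a lookup from parallax separation to height using calibrated samples. In the sketch below, the calibration table and units are illustrative assumptions rather than measured values.

```python
# Hedged sketch: interpolate the end-point height above the background surface
# from the parallax separation between the two cameras' two-dimensional
# detections. The (separation, height) calibration samples are illustrative;
# a real system would populate them during calibration.
import numpy as np

CAL_SEPARATION = np.array([12.0, 20.0, 28.0])   # pixels of parallax separation
CAL_HEIGHT = np.array([0.0, 5.0, 10.0])         # centimeters above the surface


def resolve_location(pt_cam1, pt_cam2):
    """Return (x, y, height): midpoint of the two detections plus interpolated height."""
    p1, p2 = np.asarray(pt_cam1, float), np.asarray(pt_cam2, float)
    separation = np.linalg.norm(p1 - p2)
    height = np.interp(separation, CAL_SEPARATION, CAL_HEIGHT)
    x, y = (p1 + p2) / 2.0
    return x, y, height


print(resolve_location((100, 80), (114, 80)))   # separation 14 px -> ~1.25 cm
```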
In addition, the calibration data and location resolver 114 can be configured to receive an input 115 from the automated calibration unit 24 that signifies a calibration procedure. The input 115 can include information that corresponds to precise pixel locations of the features (e.g., dots, lines, intersections) of the predefined calibration pattern 106. The calibration data and location resolver 114 can thus be configured to receive the images of the predefined calibration pattern 106, as projected onto the background surface 18, and to associate the physical locations of the features of the projected predefined calibration pattern 106 with the information provided from the automated calibration unit 24. Accordingly, the calibration data and location resolver 114 can maintain a correlation between the three-dimensional physical location of the sensorless input object 20 and the projected image data 102, thus providing accurate interaction between the gestures provided by users and the image data 102.
The data output from the calibration data and location resolver 114 is input to a gesture recognition device 116. The gesture recognition device 116 interprets the three-dimensional location data associated with the sensorless input object 20 and translates changes in the location data into an input gesture. For example, the gesture recognition device 116 could translate two-dimensional motion of the sensorless input object 20 across the background surface 18 as a gesture associated with mouse cursor movement. The gesture recognition device 116 could also translate a touch of the background surface 18 as a gesture associated with a mouse left-button click. Because the gesture recognition device 116 utilizes the location data associated with the sensorless input object 20, it can be programmed to recognize any of a variety of gestures that utilize one or more fingertips of the user's hand. In this way, the gesture recognition interface system 100 has a much more versatile input capability than touch sensitive screens.
As an example, gestures that use multiple fingertips, or even fingertips from both hands, can be interpreted as input gestures that simulate zoom commands, rotate or “twist” commands, or even environment adjustments, such as volume and brightness control, all of which can be programmed for interpretation by the gesture recognition device 116. The gesture recognition device 116 can also be programmed to recognize gestures from multiple users simultaneously. For example, the gesture recognition device 116 can provide multi-point control capability, such that coordinated actions between two hands and/or between multiple users can be implemented. Furthermore, the gesture recognition device 116 can work in conjunction with other computer input devices, such as a conventional mouse or keyboard, to provide additional types of gesture inputs. In addition, the simulated commands may not even require touching the background surface. For example, a user could simulate a mouse left-click by rapidly moving his or her finger in a downward then upward direction in the space above the background surface, such that the gesture recognition device 116 evaluates not only changes in the three-dimensional location of the fingertip, but also a time threshold associated with its motion. Moreover, any of a variety of input gestures could be formed from six-degree-of-freedom motion based on changes in three-dimensional location and orientation of the sensorless input object 20 and any associated end-points.
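As an example of a multi-fingertip gesture, the change in distance between two fingertips can be mapped to a zoom factor and the change in the angle of the line joining them to a rotation. The mapping below is an illustrative assumption rather than a defined gesture set.

```python
# Hedged sketch of interpreting a two-fingertip gesture: the change in distance
# between fingertips maps to a zoom factor and the change in angle of the line
# joining them maps to a rotation. Illustrative only.
import math


def two_finger_gesture(prev_a, prev_b, cur_a, cur_b):
    """Return (zoom_factor, rotation_degrees) from two fingertip positions."""
    def dist(p, q):
        return math.hypot(q[0] - p[0], q[1] - p[1])

    def angle(p, q):
        return math.degrees(math.atan2(q[1] - p[1], q[0] - p[0]))

    zoom = dist(cur_a, cur_b) / dist(prev_a, prev_b)
    rotation = angle(cur_a, cur_b) - angle(prev_a, prev_b)
    return zoom, rotation


# Fingertips spread apart and twist slightly between two gesture periods.
print(two_finger_gesture((0, 0), (10, 0), (-2, 0), (12, 3)))
```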
The controller 22 could also include a predefined gesture memory 118 coupled to the gesture recognition device 116. The predefined gesture memory 118 could include a plurality of predefined gestures, with each of the predefined gestures corresponding to a particular device input. For example, the predefined gesture memory 118 could include a database of specific arrangements and combinations of fingertip positions and motions that each correspond to a different computer input. The gesture recognition device 116, upon receiving the three-dimensional location data associated with the one or more end-points of the sensorless input object 20 over a given time, could poll the predefined gesture memory 118 to determine if the gesture input matches a predefined gesture. Upon determining a match, the gesture recognition device 116 could translate the gesture input into the device input that corresponds to the predefined gesture. The predefined gesture memory 118 could be pre-programmed with the appropriate predefined gesture inputs, or it could be dynamically programmable, such that new gestures can be added, along with the corresponding device inputs. For example, a user could activate a “begin gesture sample” operation, perform the new gesture, capture the appropriate images of the new gesture using the first camera 14 and the second camera 16, and input the appropriate device input for which the new gesture corresponds.
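A minimal sketch of polling such a gesture memory follows, in which each predefined gesture is stored as a coarse sequence of motion directions keyed to a device input. The gesture names, device inputs, and quantization are assumptions made for illustration.

```python
# Hedged sketch of polling a predefined gesture memory: each stored gesture is
# a template sequence of coarse motion directions, matched by exact comparison
# after quantizing the observed end-point motion. All entries are illustrative.
PREDEFINED_GESTURES = {
    ("down", "up"): "left_click",
    ("right", "right", "right"): "next_slide",
    ("left", "left", "left"): "previous_slide",
}


def quantize(displacements):
    """Reduce (dx, dy) end-point displacements to coarse direction labels."""
    labels = []
    for dx, dy in displacements:
        if abs(dx) >= abs(dy):
            labels.append("right" if dx > 0 else "left")
        else:
            labels.append("up" if dy > 0 else "down")
    return tuple(labels)


def poll_gesture_memory(displacements):
    """Return the device input for a matching predefined gesture, if any."""
    return PREDEFINED_GESTURES.get(quantize(displacements))


print(poll_gesture_memory([(1, -9), (0, 8)]))   # down then up -> "left_click"
```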
It is to be understood that a given gesture recognition interface system is not intended to be limited by the example of
The gesture recognition simulation system 150 includes four cameras 158, each of which may include a respective filter for the type of colorless light. Accordingly, the cameras 158, the background surface 156, and an associated controller (not shown) collectively form a gesture recognition interface system, such as the gesture recognition interface system 10 in the example of
A sensorless input object 162 can be used to provide input gestures over the background surface 156. To provide the interaction between the sensorless input object 162 and the given functional component 160, the controller (not shown) can detect a three-dimensional physical location of a feature of the sensorless input object 162. For example, the controller could determine the three-dimensional physical location of a feature (e.g., endpoint) of the sensorless input object 162. Upon determining a correlation of the physical locations of the sensorless input object 162 and a given functional component 160, the controller can determine a gesture motion associated with the sensorless input object 162 to determine if it corresponds with a predefined action associated with the functional component 160. Upon determining that the input gesture corresponds with the predefined action, the simulation application controller can command the three-dimensional display system 152 to output the appropriate simulated action.
In the example of
The example of
In view of the foregoing structural and functional features described above, a methodology in accordance with various aspects of the present invention will be better appreciated with reference to
At 206, a plurality of images of the sensorless input object are generated based on a reflected light contrast between the sensorless input object and the background surface illuminated by the colorless light. The plurality of images could be a plurality of matched pairs of images, such that each image of the matched pair corresponds to the sensorless input object from a different perspective at substantially the same time. In the example of a reflective background surface, the background surface could appear to be much brighter than the user-controlled sensorless input object. Therefore, the plurality of images could be silhouette images of the user-controlled sensorless input object. Alternatively, the background surface could be far away, or IR-dampening, such that the user-controlled sensorless input object appears brighter. Therefore, the plurality of images could be illuminated images.
At 208, a plurality of three-dimensional physical locations of the sensorless input object are determined relative to the background surface. For example, a three-dimensional location of the sensorless input object can be determined by interpolating between a first image associated with a first camera and a second image associated with a second camera. At 210, it is determined whether the physical motion associated with the sensorless input object corresponds to any of a plurality of predefined gestures. Changes in the three-dimensional location of at least one end-point of the sensorless input object could be determinative of the physical motion of the sensorless input object. The predefined gestures could be stored in a memory. Each predefined gesture could be associated with a different device input.
At 212, at least one device input is provided based on determining that the changes in the three-dimensional locations associated with the sensorless input object correspond to a given one of the predefined gestures. Device inputs could be mouse inputs, such that two-dimensional motion across the background surface could simulate motion of a mouse cursor, and a touch of the background surface could simulate a mouse left-click. In addition, motion associated with multiple end-points could provide different types of inputs, such as rotate and zoom commands.
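The methodology of 206 through 212 can be summarized as a loop of image capture, location resolution, gesture matching, and device input, as in the hedged sketch below; the helper callables are hypothetical placeholders for the stages described above.

```python
# Hedged end-to-end sketch of the method (steps 206 through 212). The helper
# callables are hypothetical placeholders for the stages sketched earlier.
def gesture_loop(capture_pair, resolve_3d, match_gesture, send_input, frames=100):
    history = []
    for _ in range(frames):
        img1, img2 = capture_pair()              # step 206: matched pair of images
        history.append(resolve_3d(img1, img2))   # step 208: 3D physical location
        device_input = match_gesture(history)    # step 210: compare to predefined gestures
        if device_input is not None:
            send_input(device_input)             # step 212: provide the device input
            history.clear()


# Trivial stubs so the sketch runs standalone.
gesture_loop(lambda: (None, None),
             lambda a, b: (0.0, 0.0, 0.0),
             lambda h: "left_click" if len(h) >= 3 else None,
             print,
             frames=5)
```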
What have been described above are examples of the present invention. It is, of course, not possible to describe every conceivable combination of components or methodologies for purposes of describing the present invention, but one of ordinary skill in the art will recognize that many further combinations and permutations of the present invention are possible. Accordingly, the present invention is intended to embrace all such alterations, modifications and variations that fall within the spirit and scope of the appended claims.