The invention relates generally to creating a three-dimensional ordering of objects in a two-dimensional game application. Generally one can define three classes of such applications. A first class of two-dimensional game applications has an (x,y,z) coordinate system, responsive to at least real-world two-dimensional (xw, yw) input from a game player. A second class is similar to the first class but can respond to real-world three-dimensional (xw, yw, zw) input from a game player. A third class of two-dimensional game applications has only an (x,y) coordinate system, responsive to real-world two-dimensional (xw, yw) input from a game player, and the resultant game display video rendering, unless modified, lacks a sense of depth distance. Aspects of the present invention can enhance realism for the game player, especially for the first two classes of two-dimensional game applications, and can also enhance realism for the game player if the third class of game applications is modified to work with the invention.
Electronic games have long been played on PCs and on dedicated game consoles, including hand-held portable consoles, e.g., GameBoy©, Nintendo DS©, PlayStation©. User input mechanisms to control games have evolved from keyboards, mice, joysticks, and track/touch pads, to touch screens, and more recently to three-dimensional natural interfaces. Natural interfaces can track body parts of the game player using three-dimensional imaging. Software then discerns the player's desired game control actions from movements of the player's arms, legs, torso, etc. The Kinect© console from Microsoft© uses such a natural interface, and game play is modified, substantially in real time, responsive to perceived movements of the game user's body. (The terms "user" and "player" are used interchangeably herein.)
Challenges associated with inputting three-dimensional commands to a two-dimensional application, typically a video game, will be described generally with respect to
When playing Angry Birds on a device with a touch screen, the player can touch the image of slingshot 20 and “pull-back” projectile 40 and elastic 30 in a desired (xw, yw) direction to propel projectile 40 toward target 60, 70. In
Some game device manufacturers try to promote a sense of three-dimensionality in the display itself. Some game devices might produce a three-dimensional display that requires the player to wear stereoscopic glasses, or the display may be auto-stereoscopic, in which case no eyeglasses need be worn by the player. The Nintendo© 3DS© mobile game device uses an auto-stereoscopic display to promote a three-dimensional experience, although the user interface still requires buttons and physical touching.
What is needed is a method and system whereby a two-dimensional game application, be it class one, class two, or class three, may be modified if needed and played by responding to a true three-dimensional real-world interface and corresponding timing attributes, including natural interface body and limb gestures made by the game player, without need to physically contact the game-rendering device or display. Further, such a method and system should enable the two-dimensional game application to convert to an integrated three-dimensional input and output framework. Such a result could enable the game video display to present a sense of depth along a game display z-axis, as viewed from the perspective of the game player, and should enable the player to alter or define line-of-sight control in three-dimensional space. The game application could be played on a small, handheld or portable device, without need for physical contact by the game player.
The present invention provides such systems and methods.
The present invention enables a game player or user to input three-dimensional positional and time (xw, yw, zw, tw) natural gesture commands to an electronic two-dimensional game application (class one, class two, or class three) and enables the game application to render images on a conventional planar (x,y) display in which there is a notion of continuous front, middle, and back game depth along a virtual z-axis (zv), in addition to virtual game-world coordinate (xv, yv) game movements. The game may be displayed and played upon a handheld portable device such as a smart phone, a tablet, etc.
Preferably the present invention maps the (xw, yw, zw, tw) real-world data created in response to player movement to game-world virtual coordinates (xv, yv, zv, tv), or vice versa. In this notation, (xw, yw, zw) are the usual real-world geometric (or spatial) three-dimensional measurements and (tw) is the corresponding real-world timing attribute. Similarly, (xv, yv, zv) are the virtual geometric (or spatial) three-dimensional measurements and (tv) is the corresponding game virtual timing (or clock) attribute. The duality of such real-to-virtual or virtual-to-real two-way mapping is conveniently expressed by speaking of a unified space/time coordinate system (xu, yu, zu, tu) spanning the real and virtual worlds. Depending on the direction of mapping, (xu, yu, zu, tu) may be (xw, yw, zw, tw), or (xv, yv, zv, tv), or a weighted combination of the two. It is convenient to simply drop dimension (t) from the tuple (x,y,z,t) since (t) is implied. Thus, the simplified (x,y,z) expression is understood to include (t) when applicable. Indeed, one can speak of using at least three-dimensional (x,y,z) data, with an implied fourth parameter (t).
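Although the specification prescribes no particular data structure, the notation above maps naturally onto a simple representation. The following minimal Python sketch is illustrative only; the class name, function name, and sample values are assumptions and not part of the specification. It shows one way a space/time tuple and the "weighted combination" behind the unified coordinates (xu, yu, zu, tu) might be modeled:

```python
from dataclasses import dataclass

@dataclass
class SpaceTimePoint:
    """A four-tuple (x, y, z, t): three spatial coordinates plus a timing attribute."""
    x: float
    y: float
    z: float
    t: float

def unify(world: SpaceTimePoint, virtual: SpaceTimePoint, weight: float) -> SpaceTimePoint:
    """Return unified coordinates (xu, yu, zu, tu).

    weight = 0.0 keeps the real-world values (xw, yw, zw, tw), weight = 1.0 keeps
    the virtual values (xv, yv, zv, tv), and intermediate weights give the
    'weighted combination of the two' described above.
    """
    mix = lambda a, b: (1.0 - weight) * a + weight * b
    return SpaceTimePoint(mix(world.x, virtual.x), mix(world.y, virtual.y),
                          mix(world.z, virtual.z), mix(world.t, virtual.t))

# Example: a fingertip at (5 cm, 10 cm, 20 cm) at t = 2200 s, mapped fully to virtual space.
print(unify(SpaceTimePoint(5.0, 10.0, 20.0, 2200.0),
            SpaceTimePoint(0.5, 1.0, 2.0, 10.0), weight=1.0))
```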
Any desired real-world to virtual-world scaling is performed to make player gesture movements realistic to the scale, in spatial and timing dimensions, of the rendered game image. The present invention then interfaces with the game application to cause the game to respond to the detected player gestures or other movements. Preferably this interaction is perceived from the eye or viewpoint of the player. The presence of real-time generated (xw, yw, zw) data enables game play to be responsive to natural gestures made by the player. One aspect of the virtual-world data that is created is that the player, or an icon or avatar of the player, can be rendered on the game display, allowing the player to become part of the on-going game play. In some instances the game application may require some modification to interact with input from the present invention to properly render game-world play and imagery.
Embodiments of the present invention enable the game player to dynamically interact not only with a displayed target, but also with a region in front of or behind the target, since a virtual z game-world axis is created. A more realistic and natural game play experience results, using devices as small as a smart phone to execute the game application and to render the game display. Use of three-dimensional input preferably enables a natural user interface, which allows the player to interact with the game by moving parts of the player's body to create gesture(s) recognizable by the game application. Further, use of the present invention allows the game application developer to create display screen content as seen from the perspective of the game player's eye. One result is that the player may alter or define line-of-sight control in three-dimensional space. Another result is that the player can use three-dimensional input to interact with game-application-rendered object(s) that are perceived in three-dimensional space.
In some embodiments three-dimensional real-world (xw, yw, zw) input to the game application is created using at least two spaced-apart, generic two-dimensional cameras coupled to software to recognize player gestures and movements. However other methods of inputting three-dimensional (x,y,z) data to the game playing device may also be used, e.g., a time-of-flight camera.
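As background only (the specification does not mandate any particular depth-recovery technique for the two spaced-apart cameras), the sketch below illustrates textbook triangulation for a rectified pair of two-dimensional cameras. The focal length, baseline, and principal-point values in the usage example are assumptions chosen purely for illustration:

```python
def triangulate(u_left: float, u_right: float, v: float,
                focal_px: float, baseline_m: float,
                cx: float, cy: float):
    """Recover a real-world point (xw, yw, zw) from one feature matched between
    two rectified, spaced-apart 2D cameras.

    u_left, u_right : horizontal pixel coordinates of the same feature in the
                      left and right images; v is its shared vertical pixel row.
    focal_px        : focal length in pixels; baseline_m : camera separation in meters.
    (cx, cy)        : principal point in pixels.
    """
    disparity = u_left - u_right
    if disparity <= 0:
        raise ValueError("feature must have positive disparity")
    zw = focal_px * baseline_m / disparity      # depth from the camera pair
    xw = (u_left - cx) * zw / focal_px          # lateral offset
    yw = (v - cy) * zw / focal_px               # vertical offset
    return xw, yw, zw

# Example: a fingertip imaged at column 420 (left) and 380 (right), row 300,
# with a 600-pixel focal length, a 12 cm baseline, and principal point (320, 240).
print(triangulate(420.0, 380.0, 300.0, focal_px=600.0, baseline_m=0.12, cx=320.0, cy=240.0))
```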
Other features and advantages of the invention will appear from the following description in which the preferred embodiments have been set forth in detail, in conjunction with their accompanying drawings.
It is useful at this juncture to consider various permutations of two-dimensional or three-dimensional display, with two-dimensional or three-dimensional input, to an application that may be a game application. Table 1 provides a classification overview in which rows show competing display technologies, and columns summarize competing input techniques. Relevant to embodiments of the present invention are entries in Table 1 where some form of three-dimensional input is available, namely table entries denoted [B3], [B4], [C3], [C4], [D3] and [D4]. In Table 1, "2D" and "3D" are used as shorthand to denote "two-dimensional" and "three-dimensional", respectively. The term "collision" in Table 1 refers to the hypothetical meeting of an input object with a game object in real space or in virtual game space, just as a tennis racket (an input object) hits a tennis ball (a game object).
An understanding of the noted entries in Table 1, above, is relevant to understanding the challenges associated with integrating (or unifying or mapping) the virtual game space coordinate system of a two-dimensional game application with the real-world (x,y,z) three-dimensional coordinates used by a game player to interface with the game. See for example Table 1 entries [A3], [A4], [C3], [C4], [D3], [D4].
Table 1 uses the notation (xw, yw, zw) to refer to real-world three-dimensional coordinates, and the notation (xv, yv, zv) to refer to game application or virtual-world coordinates. These coordinate values define the geometric or spatial attributes of a real or virtual entity. Of course, events in the real world or virtual world also have a time dimension. For example, the game player's fingertip may be at location (5 cm, 10 cm, 20 cm) at time 2200 seconds, starting from an arbitrary zero base time. The notation (xw, yw, zw, tw) thus refers to both geometric (spatial) and timing data of real-world objects. There is also a notion of time in the game application that is driven by the internal clock of the application, which game application internal clock may be related to or may be completely independent of world time. A virtual-world coordinate is thus defined by the notation (xv, yv, zv, tv), where tv defines the time-dependent values of virtual-world objects or events. As noted earlier, it is convenient herein to simply drop dimension (t) from the tuple (x,y,z,t) since (t) is implied where applicable. Thus the notation used here is generally of the simplified form (x,y,z).
In Table 1, entry [C3], the real-world (xe, ye, ze) coordinates of the player's eye or viewpoint are unknown, and it is necessary to construct approximately where that location would be. A rough analogy might be the viewing of a movie by a patron in a movie theater. The patron sees the film from the perspective of the camera(s) that captured the scene, rather than from the perspective of his or her own eyes. What is thus viewed is a two-dimensional image, and even if the patron gets up and changes seats in the theater, what is viewed on the movie screen remains the same. In [D3] the film could be a three-dimensional film, but one in which all viewing angles are constrained to be from the same perspective, namely the viewpoint of the camera(s) that captured the film image.
By contrast, in Table 1 entry [C4], the actual real-world (xe, ye, ze) coordinates of the player's (or patron's) eye are known, and the scene that is viewed is drawn or rendered accordingly. The movie patron sees the scene not from the vantage point of the camera(s) that recorded the scene, but rather from the patron's own vantage point. If the patron gets up and changes seats in the theater, what is viewed on the movie screen changes. Indeed, a change could occur if the patron simply rotated his or her head for a better view. Thus with respect to [C4] systems, the film is truly three-dimensional and viewing angles change with the position of the patron's eye or point of view. The remaining entries in Table 1 are self-explanatory to those skilled in the relevant art and need not be further described herein.
In conventional prior art implementations, the device executing the game application seeks to match the location of slingshot 20 with respect to target 60, 70. Game software then displays projectile 40 moving towards the target along a mathematically determined rendered trajectory, i.e., 50′. If accurately aimed, the projectile smashes into the target, and the resultant mayhem is computed and rendered on display 10 by the game application software, using virtual game software coordinates. Game rules determine points earned for the player, etc. Some prior art approaches have a notion of projecting internal game space onto the display screen. But, absent true three-dimensional input from the player, such games cannot render a display and game experience that mimics realistic three-dimensional player interaction with the game. For example, in prior art
For purposes of the present invention, it is relatively unimportant how real-world three-dimensional (xw, yw, zw) information is created. However preferably this data is generated substantially in real-time, and is true three-dimensional data that has been corrected for any distortion associated with the gathering of the data, including optical distortion associated with data-gathering cameras. Thus the expression (xw, yw, zw) is understood to refer to true, undistorted, three-dimensional real-world input data.
In
The overlapping fields of view of the two cameras define a three-dimensional hover zone that can include the surface of display 10, a three-dimensional space within which player gestures may be captured optically and interpreted. Preferably the cameras are pre-calibrated and are mounted at opposite corners of device 220 to better capture player gestures. Aggregated frames of two-dimensional information acquired by the cameras are communicated at a frame rate for processing by an electronic system, associated with module 250 in
In a preferred embodiment, spaced-apart two-dimensional cameras 240A, 240B are disposed at opposite corners of device 220 such that their overlapping optical fields of view (FOV) define a three-dimensional zone in which the cameras can image at least portions of the game player 100. The cameras can be mounted such that the overlapping FOV grazes the surface of display 10, to also detect (if desired) physical contact by player 100 with the surface of display 10. In
In
In the embodiment of
In the embodiment of
Assume then that real-world three-dimensional (xw, yw, zw) player input data is being generated for use by the present invention, the precise method of generation not being critical. Referring now to
Consider now
Defined below is a preferred linear three-dimensional transformation that maps real-world three-dimensional input coordinates to virtual game-world coordinates, according to embodiments of the present invention. For ease of explanation, at this juncture it is assumed that any optical distortion associated with the acquisition of natural interface three-dimensional input data by the game system is either negligible or has already been corrected. The preferred transformation is given by:

(xv, yv, zv) = R·(xw, yw, zw) + T

where R and T may be conventional rotation and translation matrices from real-world coordinates to virtual-world coordinates.
A scaling factor may be introduced into the above transformation to account for differences in measurement units. For instance, the game application may require a 10 cm movement by the player in the real world to represent a 10 m distance in the virtual game world. It is understood that each of the three dimensions (x,y,z) may have its own, different, scaling factor value. The same scaling may be applied to other terms, such as time, velocity, inertia, etc., associated with game movements, user gestures, etc. coupled to the game system.
Using the above preferred transformation, player-created real-world three-dimensional input data can be represented in the same three-dimensional virtual world, along with objects that are rendered (or materialized) in the three-dimensional game space. In the above transformation, the definition of R will depend upon the values and effects that the game application engine wants to achieve, although in the simplest form, R could be an identity matrix. The game application engine can also alter values for T.
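A minimal sketch of the transformation and per-axis scaling described above is given below. It assumes the ordering pv = S·(R·pw + T), an identity rotation, and sample scale values, none of which are prescribed by the specification:

```python
def real_to_virtual(pw, R, T, scale=(1.0, 1.0, 1.0)):
    """Map a real-world point pw = (xw, yw, zw) to virtual game-world
    coordinates (xv, yv, zv) via pv = S * (R * pw + T), where R is a 3x3
    rotation matrix, T a translation, and S a per-axis scale factor."""
    rotated = [sum(R[i][j] * pw[j] for j in range(3)) for i in range(3)]
    translated = [rotated[i] + T[i] for i in range(3)]
    return tuple(scale[i] * translated[i] for i in range(3))

# Simplest form: R is the identity matrix, T shifts the origin, and a scale of
# 100 per axis makes a 10 cm real-world hand movement cover 10 m of game space.
R_identity = [[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]]
T_offset = [0.0, 0.0, 0.5]
print(real_to_virtual((0.10, 0.0, 0.0), R_identity, T_offset, scale=(100.0, 100.0, 100.0)))
```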
Those skilled in the art will recognize that for visual purposes the images of such three-dimensional real world and virtual world objects can be rendered on a display screen using the usual parallel or perspective projections.
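For readers unfamiliar with such projections, the sketch below shows a conventional perspective projection of a virtual-world point onto a planar display; the focal length and screen dimensions are illustrative assumptions, not values taken from the specification:

```python
def perspective_project(xv, yv, zv, focal=1.0, screen_w=640, screen_h=480):
    """Project a virtual-world point onto a 2D display using a simple
    perspective projection (points farther along +z shrink toward the center).
    The focal length and screen size are illustrative assumptions."""
    if zv <= 0:
        raise ValueError("point must lie in front of the virtual camera")
    x_ndc = focal * xv / zv                     # perspective divide
    y_ndc = focal * yv / zv
    # Map normalized coordinates to pixel coordinates, origin at screen center.
    return (screen_w / 2 + x_ndc * screen_w / 2,
            screen_h / 2 - y_ndc * screen_h / 2)

# A game object at (0.5, 0.25, 2.0) in virtual space lands right of and above screen center.
print(perspective_project(0.5, 0.25, 2.0))
```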
Thus in
The three-dimensional output data generated by software associated with embodiments of the present invention may be used to extend the virtual game world into the real world, as shown in the embodiment of
In the embodiments of
In practice, optical calibration associated with the lenses in data-gathering cameras 240A, 240B, or, in the time-of-flight embodiment, associated with TOF camera 270, should be accounted for to minimize inaccuracies. For example, uncorrected camera lens distortion can produce erroneous location coordinates for the game player's head with respect to the camera(s), especially if the head moves to a location beyond the overlapping field of view of two-dimensional cameras 240A, 240B. In the embodiment of
As noted, embodiments of the present invention relate to methods of using three-dimensional input data to enable two-dimensional game applications to render more realistic displays and thus enhance realism. Successful interaction between game applications and the present invention realistically requires the use of calibration information to un-distort images acquired by cameras 240A, 240B. However, as shown by the uncorrected image 280 in
Implementing target aiming using natural interfaces with a portable or handheld device (perhaps a smart phone) can be a challenging proposition. Referring to
As noted with respect to
In the embodiment of
In the embodiment depicted in
An advantage of embodiments of the present invention with respect to target visibility and aiming is shown in
In many embodiments of the present invention, natural gesture recognition, enabled by the three-dimensional input data captured as the player interacts with the game application, is employed. Thus in
Recall that system 260 (in
Returning to
Step 520 in
Following method steps 510 and 520, given real world coordinate system (xw, yw, zw), and virtual coordinate system (xv, yv, zv), step 530 in
Block 540 in
Following method step 540, initialization typically has been completed and the run-time application loop commences with method step 550. Of course, the term "loop" does not mean that game application 242 is required to always follow a pre-determined loop. Instead, the game application logic may dictate that the application state follow a complex network of states. But for clarity of illustration and brevity, the game application run-time logic may be described as a loop.
Refer now to
Method step 560 in
If three-dimensional display is desired, at step 570 preferably module 260 (see
In
It is desired to properly determine the scale of the above-noted transformations. For example, a representation of the player should appear to reach a game object, such as a ball, in virtual game space. Regardless of the direction of translation at step 570 or step 580, a unification of geometric and timing scaling is preferably carried out at step 590 within system 260 to produce a unified real/virtual coordinate system (xu, yu, zu, tu). For instance, in a given game application a projectile that travels at Mach-magnitude speed along a trajectory may be slowed to speeds commensurate with the real-world speed of player hand movements. Alternatively, in some applications the real-world speed of player hand movements must be up-scaled to Mach-magnitude virtual-world speeds commensurate with the game application. Thus, if the processing flow to method step 590 arrives from method step 580, then (xu, yu, zu, tu) is closer to or the same as (xv, yv, zv, tv), and if the processing flow to method step 590 arrives from method step 570, then (xu, yu, zu, tu) is closer to or the same as (xw, yw, zw, tw).
In
Method step 630 in
At method step 640, the game application clock (or in a more general term, the game application logic) is advanced to the next state. If the current state of the game application requires updating any coordinate transformation, timing or scaling parameters between the virtual world coordinates and real world coordinates (as described for steps 530 and 540), such parameters are updated at method step 640. The previous steps in the loop, namely steps 570, 580, 590, will use the updated information for their computation that takes place in system module 260, for use by game system module 245.
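Purely as an illustrative sketch of the run-time loop just described, the Python below acquires a real-world sample each pass, maps it to virtual coordinates (as at step 580), tests for a "collision" between the player and a game object in virtual space, and advances the application clock (as at step 640). The function and parameter names are hypothetical and not drawn from the specification:

```python
import math

def run_game_loop(frames, acquire, to_virtual, targets, hit_radius=0.5):
    """Hedged sketch of the run-time loop: each pass acquires a real-world
    (xw, yw, zw) sample, maps it to virtual coordinates, checks for a
    'collision' with game objects in virtual space, and advances the clock."""
    score = 0
    for tick in range(frames):                    # the application clock advances each pass
        xw, yw, zw = acquire(tick)                # real-world player sample from the 3D input system
        xv, yv, zv = to_virtual((xw, yw, zw))     # real-to-virtual mapping (steps 580/590 combined here)
        for tx, ty, tz in targets:                # 'collision': the player's hand reaches a game object
            if math.dist((xv, yv, zv), (tx, ty, tz)) < hit_radius:
                score += 1
    return score

# Toy usage with a stand-in acquisition function and an identity mapping (both assumed).
fake_acquire = lambda tick: (0.01 * tick, 0.0, 0.0)   # hand drifts 1 cm along x each frame
identity = lambda p: p
print(run_game_loop(frames=100, acquire=fake_acquire, to_virtual=identity,
                    targets=[(0.5, 0.0, 0.0)]))
```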
In some instances it is desirable or necessary to alter game application 242 to enable it to function smoothly with information provided to it by the present invention. For example, referring to entry [C2] in Table 1, a game application 242 may support internal virtual three-dimensional coordinates, but the method used to drive that application is a traditional two-dimensional input, e.g., a touch display screen, a keyboard, etc. The game application may already map two-dimensional inputs (e.g., the CTRL (Control) key plus the up/down arrow keys) to the z-dimension for complete virtual three-dimensional actions. More specifically, operation of these arrow keys may change zv values in game application (xv, yv, zv) coordinates while maintaining the values of xv and yv. Substantially simultaneously, the left/right and up/down arrow keys, without the CTRL key, may be assigned to change the values of xv and yv, correspondingly. But clearly, in the absence of good three-dimensional input (xw, yw, zw), the player-game interaction will not be very natural. Thus it is desirable to modify this game application to make the application aware of the three-dimensional input data, e.g., (xw, yw, zw) originally gathered by system 230. In essence, game application 242 is modified to game application 242′ to behave like an application in category [C3] in Table 1. In a preferred embodiment, the transformations in method steps 580 and 590 (see
It is understood that embodiments of the present invention may be used with game and/or other applications that can respond to three-dimensional input. As noted, devices executing such applications may include mobile or handheld platforms including smart phones, tablets, portable game consoles, laptop and netbook computers, in addition to non-mobile platforms, perhaps PCs, TVs, entertainment boxes, among other appliances. Those skilled in the art will appreciate that, in addition to enhancing game playing, embodiments of the present invention could also be deployed to enable a user to manipulate menu(s) rendered on a display screen, to engage virtual selection button(s) rendered on a display screen, etc.
Modifications and variations may be made to the disclosed embodiments without departing from the subject and spirit of the invention as defined by the following claims.
This application is a continuation of applicants' co-pending U.S. patent application Ser. No. 13/506,474 filed Apr. 20, 2012 that claims priority to U.S. Provisional Patent Application No. 61/517,657, filed on Apr. 25, 2011, both of which are incorporated by reference as if set forth in their entirety herewith.