The present invention generally relates to the field of electronics. More specifically, the present invention relates to methods, circuits, apparatus and systems for facilitating human interfacing with electronic devices such as personal computers, set-top boxes, smart televisions, general purpose computing platforms, mobile devices, cell phones, Personal Digital Assistants (“PDAs”), digital cameras, or any integrated combination of electronic devices.
The present application claims priority from U.S. patent application Ser. No. 13/497,061, which is hereby incorporated by reference in its entirety.
In recent decades, electronic technology, including communication technology, has revolutionized our everyday lives. Electronic devices such as PDAs, cell phones, e-books, notebook computers, mobile media players and digital cameras have permeated the lives of almost every person living in the developed world—and quite a number of people living in undeveloped countries. Mobile communication and computing devices, especially, have become the means by which countless millions conduct their personal and professional interactions with the world. It has become almost impossible for many people, especially those in the business world, who use these devices as a means to improve productivity, to function without access to their electronic devices.
However, with this tremendous proliferation in the use of electronic devices, there has developed a tradeoff between enhanced productivity and simplicity or convenience. As handheld devices evolved to perform more and more tasks, the complexity of the interfaces required to interact with these devices has likewise increased. Many of today's handheld devices come equipped with some variation or another of a full typewriter keyboard. Some devices have fixed keyboards which are electromechanical in nature, while others project a keyboard, a key pad or some variation of either onto a display associated with a touch screen sensor array. Because of the need to keep mobile or handheld devices compact enough to carry around, many of the physical and virtual (i.e. projected) keyboards and keypads implemented on these devices have keys or other interface components which are quite small relative to an average human finger, and thus difficult to operate.
Thus, there is a need for improved methods, circuits, apparatus and systems for interfacing with an electronic device.
The present invention includes methods, circuits, devices, systems and associated computer executable code for interacting with a computing platform screen. According to some embodiments, there may be provided a multimode Touchless Human Machine Interface (TLHMI) which may facilitate interaction or interfacing with a computing platform display screen. The TLHMI may also be referred to as a Computing Platform Display Screen Interaction Facilitating System. The multimode TLHMI may be integral or otherwise functionally associated with a computing platform. The TLHMI may be adapted to touchlessly detect, for example through a video camera, the presence, position, orientation and velocity of some or all portions of a subject/person within a detection zone of one or more touchless sensors integral or otherwise functionally associated with the TLHMI. The TLHMI-detectable subject portions may include the subject's head, shoulders, torso, legs, feet, arms, hands, fingers and/or objects attached to or being held by the subject. The TLHMI may identify which detected movements of the one or more subject portions are intended for interaction with the computing platform, and may track the identified movements. The TLHMI may be adapted to track the position, orientation, velocity and/or gestures of a subject portion which has been identified as intended for interaction with the computing platform. The TLHMI may include a User Input Generator adapted to generate a computing platform user input signal in response to tracking of the position, orientation, velocity and/or gestures of such a subject portion. The TLHMI may be adapted to switch between two or more modes of operation in response to detection or identification of one or more parameters of a tracked subject portion (e.g. hand), wherein identified tracked portion parameters may include speed, direction, position, orientation, motion pattern, or gesture.
According to some embodiments of the present invention, the TLHMI may be integral or functionally associated with one or more touchless sensors, including: (1) image sensors, (2) image sensor arrays, (3) electrostatic sensors, (4) capacitive sensors, (5) inductive sensors, (6) optical gated array sensors, (7) LIDAR based sensors, or (8) any other functionally suited sensor that may touchlessly sense speed, direction, position, orientation, motion pattern, or gesture of a subject portion or implement connected to a subject portion. The touchless sensor may be integral with a computing platform or with a screen of a computing platform. According to some embodiments, the TLHMI may be at least partially in the form of computer executable code running on the computing platform.
According to some embodiments of the present invention, the TLHMI operating in a first mode of operation may generate a first user input signal in response to a given tracked motion, and may generate a second user input signal, different from the first user input signal, in response to the same given tracked motion while operating in a second mode of operation. For example, a transition in a TLHMI mode may alter a ratio of “detected motion” to “pointer element movement deviation”. A TLHMI mode transition may also alter a rendering aspect of one or more elements on a screen. TLHMI mode transition may also alter an order, grouping and/or visibility of one or more elements on a screen. Transitions between a first mode and a second mode of operation may be triggered by detection or identification of motion parameters or detected gesture parameters, such as: (1) subject portion (e.g. hand) speed, (2) subject portion motion direction, (3) subject portion orientation or configuration, and/or (4) predefined mode transitioning gestures.
According to embodiments of the present invention, a TLHMI which is integral or otherwise associated with a computing platform may touchlessly detect and/or track motions or gestures of a computer platform user. In response to the detected/tracked motions or gestures, the TLHMI may generate and present to the computing platform a user input signal (native signal, standard or customized signals) defined within a “Detected Motion to Screen Element Deviation Mapper” (DMSEM) as corresponding to the detected/tracked motions or gestures. Generated user input signal types (also referred to as events) may include: (1) mouse movement or clicking events, (2) touchpad movement or tapping events, (3) keypad or keyboard events, (4) screen scrolling events, and/or (5) any other user input signals or events known today or to be devised in the future.
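The gesture-to-event generation described above can be sketched as a simple lookup, in the spirit of the DMSEM mapping. The following is a minimal Python illustration; the gesture names, event types and payloads are hypothetical assumptions, not taken from the specification:

```python
# Hypothetical mapping from a detected/tracked gesture to a generated
# user input event (event type plus payload), as a DMSEM might hold it.
GESTURE_TO_EVENT = {
    "swipe_left": ("scroll", {"dx": -40, "dy": 0}),
    "swipe_right": ("scroll", {"dx": 40, "dy": 0}),
    "tap": ("mouse_click", {"button": "left"}),
}

def generate_user_input(gesture):
    """Return the (event_type, payload) pair for a detected gesture,
    or None when the gesture maps to no user input signal."""
    return GESTURE_TO_EVENT.get(gesture)
```

In practice the generated events would be presented to the platform's native user input module rather than returned to the caller.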
The computing platform may include graphical user interface logic (GUIL) and circuitry, according to some embodiments, including: (1) Graphical User Interface (GUI) rendering code, (2) Display Drivers, (3) a Graphics Processing Unit (GPU), and/or (4) VGA/HDMI/DVI output circuits, for generating or rendering video information at least partially indicative of a user's interaction with the computing platform. The GUIL may be adapted to render screen elements such as screen graphics, images, icons, characters, documents, video, control elements and user input elements such as pointers, cursors or virtual environment avatars. The GUIL may re-render and move one or more user input elements (e.g. pointer) responsive to the computing platform receiving a user input signal, either through native/conventional user interfaces such as a mouse, a keyboard, touchscreen sensors, etc., or through the TLHMI according to embodiments of the present invention. In response to a TLHMI mode transition, the GUIL may alter one or more rendering aspects of one or more screen elements, such as a user input element (e.g. pointer) or the area around the user input element.
The TLHMI may include or be otherwise functionally associated with a detected motion to screen element deviation mapper (DMSEM) according to embodiments of the present invention. The DMSEM may receive a signal or other indicator indicative of a tracked user motion (direction, magnitude and velocity), and in response to receiving said indicator may: (1) determine or estimate a direction and magnitude by which to move or deviate a screen element such as a user input element (e.g. pointer), and (2) generate and provide to a user input module of the computing platform a user input signal intended to effectuate the user input element deviation. According to some embodiments, the DMSEM may include or be functionally associated with a User Input Generator for generating user input signals conveyed to a user input module of a computing platform.
The DMSEM may use a first ratio of detected motion to user input element deviation while operating in a first mode. For example, the DMSEM may generate user input signals intended to move a screen (mouse) pointer by one centimeter for each centimeter of tracked user motion while operating in the first mode. The DMSEM may use a second ratio of detected motion to user input element deviation, different from said first ratio, while operating in a second mode. For example, the DMSEM may generate user input signals intended to move a screen (mouse) pointer by one centimeter for each three centimeters of detected user motion while operating in the second mode.
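The two ratio examples above can be expressed directly as a per-mode scale factor. This is a minimal sketch: the mode names and the 1:1 and 1:3 ratios follow the examples in this paragraph, while the function and table names are assumptions for illustration:

```python
# Per-mode detected-motion-to-pointer-deviation ratios, following the
# 1 cm : 1 cm and 1 cm : 3 cm examples in the text.
MODE_RATIOS = {
    "first": 1.0,         # 1 cm of pointer travel per 1 cm of hand motion
    "second": 1.0 / 3.0,  # 1 cm of pointer travel per 3 cm of hand motion
}

def pointer_deviation(tracked_cm, mode):
    """Map a tracked user motion (in cm) to the intended pointer
    deviation (in cm) for the DMSEM's current mode of operation."""
    return tracked_cm * MODE_RATIOS[mode]
```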
The DMSEM may switch between each of two or more modes, and associated detected motion to user input element deviation ratios, in response to detection and identification of a mode transitioning parameter within a tracked motion or gesture. According to some embodiments, the motion or gesture itself may be a mode transitioning parameter. Detection of a mode transitioning parameter within a tracked motion or gesture may be performed by the DMSEM, and the DMSEM may respond to the detection by initiating a TLHMI mode transition. Alternatively, detection of a mode transitioning parameter within a detected motion or gesture may be performed by a module of the TLHMI other than the DMSEM, for example a Mode Transition Detection Module (MTDM). According to embodiments including a MTDM, the MTDM may signal the DMSEM to transition between modes and ratios upon the MTDM identifying a mode transitioning parameter within a tracked motion or gesture. Irrespective of whether the mode transitioning parameter is identified by a discrete, standalone module or by a sub-module of the DMSEM, a module which performs the function of identifying mode transitioning parameters within a tracked motion or tracked gesture may be referred to as a Mode Transition Detection Module (MTDM).
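One way the MTDM-to-DMSEM signaling described above might be wired is sketched below. The transition-gesture table, class shapes and ratio values are illustrative assumptions, not taken from the specification:

```python
# Hypothetical table of predefined mode-transitioning gestures.
TRANSITION_GESTURES = {"open_palm": "second", "closed_fist": "first"}

class MTDM:
    """Mode Transition Detection Module: identifies mode transitioning
    parameters within a tracked motion or gesture."""
    def identify(self, gesture):
        # Returns the target mode, or None if the gesture is not a
        # predefined mode-transitioning gesture.
        return TRANSITION_GESTURES.get(gesture)

class DMSEM:
    """Detected Motion to Screen Element Deviation Mapper holding one
    motion-to-deviation ratio per mode of operation."""
    RATIOS = {"first": 1.0, "second": 1.0 / 3.0}

    def __init__(self, mtdm):
        self.mtdm = mtdm
        self.mode = "first"

    def observe(self, gesture):
        target = self.mtdm.identify(gesture)
        if target is not None:
            self.mode = target  # the MTDM signals the mode transition
        return self.RATIOS[self.mode]
```

The DMSEM keeps its current ratio until the MTDM reports another transitioning gesture, matching the mode-persistence behavior implied by the text.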
Examples of mode transitioning parameters within detected or tracked motions or gestures may include: (1) speed of the tracked subject portion or limb (e.g. hand or object held by subject), (2) direction of motion of the tracked subject portion, (3) position of the tracked subject portion, (4) orientation of the tracked subject portion, (5) configuration of the tracked subject portion, and (6) a predefined gesture performed or executed by the subject portion.
Various detected mode transitioning parameters may trigger any one of a number of TLHMI operational transitions according to embodiments of the present invention. More specifically, a slowing or a pause in the movement of a tracked subject/user portion (e.g. hand) may be defined as a mode transitioning parameter which may trigger a switch/transition of the TLHMI from a first to a second mode of operation, wherein the second mode of operation may be associated with a higher DMSEM ratio of detected motion (e.g. hand movement) to user interface element (e.g. pointer) deviation. That is, a given tracked movement of a hand will result in a smaller pointer deviation in the second mode than in the first mode. Conversely, acceleration of a tracked hand may be defined as a mode transitioning parameter and may cause a mode transitioning event back to the first mode, such that the given tracked movement of a hand will result in a larger pointer deviation than in the second mode.
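The slow-down and re-acceleration transitions just described could be implemented as a simple speed rule. In this sketch the threshold values and mode names are assumptions, and the dead band between the two thresholds (hysteresis, to avoid rapid mode flapping) is our addition rather than something the specification requires:

```python
def mode_for_speed(speed_cm_s, slow=2.0, fast=8.0, current="first"):
    """Speed-based mode selection with hysteresis: a pause or slow
    movement enters the fine ('second') mode, and only a clear
    re-acceleration returns to the coarse ('first') mode."""
    if speed_cm_s < slow:
        return "second"
    if speed_cm_s > fast:
        return "first"
    return current  # between thresholds: keep the current mode
```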
Alternatively, movement or repositioning of a tracked subject portion from a first region of the detection zone to a second region (e.g. closer or further from the TLHMI or from a screen associated with the computing platform) may be defined as a mode transitioning parameter according to embodiments of the present invention. For example, positioning or movement of a tracked hand closer to a screen may trigger a switch/transition of the TLHMI from a first to a second mode of operation, wherein the second mode may be associated with a higher DMSEM ratio of detected motion (e.g. hand movement) to user interface element (e.g. pointer) deviation. That is, a given tracked movement of a hand, while the hand is closer to the screen, may result in a smaller pointer deviation than the same given movement would have caused had the hand been further from the screen. Conversely, positioning or movement of a tracked hand further away from a screen may be defined as a mode transitioning event back to the first mode, such that the given tracked movement of a hand may result in a larger pointer deviation than in the second mode. In more general terms, different regions of a TLHMI detection zone may be associated with different modes of operation of the TLHMI.
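Associating detection-zone regions with modes might be as simple as thresholding the tracked hand's distance from the screen. A minimal sketch, in which the 30 cm boundary and the function name are assumed values for illustration:

```python
def mode_for_region(distance_to_screen_cm, near=30.0):
    """Associate detection-zone regions with TLHMI modes: a hand closer
    to the screen than `near` selects the fine ('second') mode, a hand
    further away selects the coarse ('first') mode."""
    return "second" if distance_to_screen_cm < near else "first"
```

More elaborate embodiments could partition the detection zone into several regions, each keyed to its own mode and DMSEM ratio.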
Alternatively, specific orientations or configurations of a tracked subject portion (e.g. a hand) may be defined as mode transitioning parameters according to embodiments of the present invention. For example, orienting or configuring a tracked hand in certain orientations (e.g. up or sideways) or in certain configurations (e.g. open palm or closed fist) may trigger a switch/transition of the TLHMI from a first to a second mode of operation. In more general terms, different orientations or configurations of a tracked subject portion may be associated with different modes of operation of the TLHMI.
According to embodiments of the present invention the TLHMI may include or be otherwise functionally associated with Display Augmentation Logic (DAL), which DAL may also be integral or otherwise functionally associated with the computing platform GUIL. In response to a detection or identification of mode transitioning parameters/events within a tracked subject portion (e.g. hand), as previously described or the like, the DAL may transition modes and accordingly may signal or otherwise cause the GUIL to alter a rendering aspect of one or more elements on a screen of the computing platform. For example: (1) upon the TLHMI transitioning from a first to a second mode, the DAL may cause the GUIL to enlarge a screen region around a user interface element (e.g. pointer) whose movement is driven by the TLHMI; (2) upon the TLHMI transitioning from a first to a second mode, the DAL may cause the GUIL to generate a frame around a screen region around a user interface element (e.g. pointer) whose movement is driven by the TLHMI; (3) upon the TLHMI transitioning from a second to a first mode, the DAL may cause the GUIL to shrink a screen region around a user interface element (e.g. pointer) whose movement is driven by the TLHMI; and (4) upon the TLHMI transitioning from a second to a first mode, the DAL may cause the GUIL to remove a frame from a screen region around a user interface element (e.g. pointer) whose movement is driven by the TLHMI.
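The four example DAL behaviors above can be summarized as a transition table. This sketch uses assumed action names; in a real embodiment these would be requests the DAL sends to the GUIL:

```python
def dal_actions(old_mode, new_mode):
    """Return the rendering changes the DAL would request from the
    GUIL for a given TLHMI mode transition, following the four
    examples in the text (action names are assumed)."""
    if (old_mode, new_mode) == ("first", "second"):
        return ["enlarge_region_around_pointer",
                "frame_region_around_pointer"]
    if (old_mode, new_mode) == ("second", "first"):
        return ["shrink_region_around_pointer",
                "remove_frame_around_pointer"]
    return []  # no transition, or an unmapped transition: no change
```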
According to some embodiments, the TLHMI may be integral or otherwise functionally associated with a computing platform having a touchscreen, for example a cell-phone, a smart-phone, an e-book, a notebook computer, a tablet computer, etc. According to some embodiments of the present invention, the TLHMI may provide for adaptive touchscreen input functionality such that an aspect of a rendered keyboard, rendered keypad or any other rendered touchscreen input elements or controls such as rendered control keys, control buttons, slide bars, etc. may be altered responsive to a detected mode transitioning parameter or event of a touchlessly tracked subject portion. For example, the adaptive touch-screen input functionality may alter the size, shape or location of input/control elements in proximity of a finger, limb or implement used by a user to touch the screen.
According to some embodiments of the present invention, one or more sensors such as: (1) image sensors, (2) image sensor arrays, (3) electrostatic sensors, (4) capacitive sensors, or (5) any other functionally suited sensor may touchlessly sense a location and/or motion vector of a finger, limb or implement approaching the touch screen. The sensor(s) may provide to the adaptive touchscreen input arrangement an indication of the sensed position or motion vector of the finger/limb/implement relative to the input elements or keys—thereby indicating which input elements or keys are being approached. In response to the indication, a DAL and/or GUIL associated with the touchscreen input arrangement may cause the size, shape or location of input elements/controls within proximity of the sensed finger, limb or implement to be altered, in order to make the input element more prominent (e.g. larger or in a better location) and more easily engageable.
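The key-enlargement behavior just described might be sketched as follows. The key layout, the 1.5 cm proximity radius and the 1.6x scale factor are illustrative assumptions:

```python
import math

def adapt_keys(keys, finger_xy, radius=1.5, scale=1.6):
    """Enlarge touchscreen keys whose centers lie within `radius` cm of
    the touchlessly sensed finger position, making the approached keys
    easier to engage. `keys` maps key label -> (x_cm, y_cm, size_cm)."""
    fx, fy = finger_xy
    adapted = {}
    for label, (x, y, size) in keys.items():
        near = math.hypot(x - fx, y - fy) <= radius
        adapted[label] = (x, y, size * scale if near else size)
    return adapted
```

A fuller embodiment would also reposition neighboring keys so enlarged keys do not overlap them.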
According to yet further embodiments of the present invention, there may be provided a human interface surface (e.g. touchscreen display) comprising presentation and sensing elements. The presentation elements and the sensing elements may be integrated into a single substrate material or may be part of separate substrates which are mechanically attached to one another in an overlapping manner. According to further embodiments of the present invention, there may be provided a controller (e.g. display drive circuit) adapted to send one or more presentation signals to the presentation elements of the human interface surface based at least partially on data stored in a presentation configuration table (e.g. virtual keyboard layout including location and size of keys) and based on a current state of the device. The current state of the device may be determined based on one or more signals received from the sensing elements and/or based on one or more signals received from the device.
According to further embodiments of the present invention, the controller may associate a function or device command signal with each of the one or more signals received from the sensing elements (e.g. when the sensing elements are touched), wherein the association of a command or function may be at least partially based on data from a first data set in a sensing element configuration table. The data selected from the sensing element configuration table may be correlated to data from the presentation configuration table used by the controller to send one or more signals to the presentation elements.
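The correlation between the two configuration tables can be sketched as a pair of lookups. All table contents, identifiers and command names below are illustrative assumptions:

```python
# Hypothetical presentation configuration table: a virtual keyboard
# layout giving each presentation element a label and geometry.
PRESENTATION_TABLE = {       # element id -> (label, x, y, w, h)
    "k1": ("A", 0, 0, 2, 2),
    "k2": ("B", 2, 0, 2, 2),
}

# Hypothetical sensing element configuration table, correlating each
# sensing element with a presentation element and a device command.
SENSING_TABLE = {            # sensing id -> (presentation id, command)
    "s1": ("k1", "TYPE_A"),
    "s2": ("k2", "TYPE_B"),
}

def handle_touch(sensing_id):
    """Resolve a touched sensing element to its device command and the
    label of the on-screen key it is correlated with."""
    pres_id, command = SENSING_TABLE[sensing_id]
    label = PRESENTATION_TABLE[pres_id][0]
    return command, label
```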
The subject matter regarded as the invention is particularly pointed out and distinctly claimed in the concluding portion of the specification. The invention, however, both as to organization and method of operation, together with objects, features, and advantages thereof, may best be understood by reference to the following detailed description when read with the accompanying drawings in which:
It will be appreciated that for simplicity and clarity of illustration, elements shown in the figures have not necessarily been drawn to scale. For example, the dimensions of some of the elements may be exaggerated relative to other elements for clarity. Further, where considered appropriate, reference numerals may be repeated among the figures to indicate corresponding or analogous elements.
In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the invention. However, it will be understood by those skilled in the art that the present invention may be practiced without these specific details. In other instances, well-known methods, procedures, components and circuits have not been described in detail so as not to obscure the present invention.
Unless specifically stated otherwise, as apparent from the following discussions, it is appreciated that throughout the specification discussions utilizing terms such as “processing”, “computing”, “calculating”, “determining”, or the like, refer to the action and/or processes of a computer or computing system, or similar electronic computing device, that manipulate and/or transform data represented as physical, such as electronic, quantities within the computing system's registers and/or memories into other data similarly represented as physical quantities within the computing system's memories, registers or other such information storage, transmission or display devices.
Embodiments of the present invention may include apparatuses for performing the operations herein. This apparatus may be specially constructed for the desired purposes, or it may comprise a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but is not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs) electrically programmable read-only memories (EPROMs), electrically erasable and programmable read only memories (EEPROMs), magnetic or optical cards, or any other type of media suitable for storing electronic instructions, and capable of being coupled to a computer system bus.
The processes and displays presented herein are not inherently related to any particular computer or other apparatus. Various general purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct a more specialized apparatus to perform the desired method. The desired structure for a variety of these systems will appear from the description below. In addition, embodiments of the present invention are not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the inventions as described herein.
While certain features of the invention have been illustrated and described herein, many modifications, substitutions, changes, and equivalents will now occur to those skilled in the art. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the true spirit of the invention.
Prior Publication Data

| Number | Date | Country |
|---|---|---|
| 20120218183 A1 | Aug 2012 | US |
Provisional Applications

| Number | Date | Country |
|---|---|---|
| 61624372 | Apr 2012 | US |
| 61244136 | Sep 2009 | US |
Parent Case Info

| Relation | Number | Country |
|---|---|---|
| Parent | 13497061 | US |
| Child | 13468282 | US |