Methods circuits apparatus and systems for human machine interfacing with an electronic appliance

Information

  • Patent Grant
  • Patent Number
    9,218,126
  • Date Filed
    Tuesday, September 21, 2010
  • Date Issued
    Tuesday, December 22, 2015
Abstract
Disclosed are methods, circuits, apparatus and systems for human machine interfacing with a computational platform or any other electronic device, such as a cell-phone, smart-phone, e-book, notebook computer, tablet computer, etc. According to some embodiments, there may be provided an adaptive touch-screen input arrangement, such as a keyboard, keypad or any other touch screen input arrangement including one or more input elements such as rendered or projected keys or buttons which may be projected onto or rendered on a touch screen display. The adaptive touch-screen input arrangement may be adapted to alter the size, shape or location of input elements within proximity of a finger, limb or implement used by a user to touch the screen.
Description
FIELD OF THE INVENTION

The present invention generally relates to the field of electronics. More specifically, the present invention relates to methods, circuits, apparatus and systems for facilitating human interface with electronic devices such as mobile devices, cell phones, Personal Digital Assistants (“PDA”), digital cameras, or any integrated combination of electronic devices.


BACKGROUND

In recent decades, electronic technology, including communication technology, has revolutionized our everyday lives. Electronic devices such as PDAs, cell phones, e-books, notebook computers, mobile media players and digital cameras have permeated the lives of almost every person living in the developed world—and quite a number of people living in undeveloped countries. Mobile communication and computing devices, especially, have become the means by which countless millions conduct their personal and professional interactions with the world. It has become almost impossible for many people, especially those in the business world, who use these devices as a means to improve productivity, to function without access to their electronic devices.


With this tremendous proliferation in the use of electronic devices, however, there has developed a tradeoff between enhanced productivity and simplicity or convenience. As handheld devices have evolved to perform more and more tasks, the complexity of the interfaces required to interact with these devices has likewise increased. Many of today's handheld devices come equipped with some variation or another of a full typewriter keyboard. Some devices have fixed keyboards which are electromechanical in nature, while others project a keyboard, a key pad or some variation of either onto a display associated with a touch screen sensor array. Because of the need to keep mobile or handheld devices compact enough to carry around, many of the physical buttons and/or virtual keys (i.e. projected keyboards and keypads) implemented on these devices have keys or other interface components which are quite small relative to an average human finger, and thus difficult to operate.


Thus, there is a need for improved methods, circuits, apparatus and systems for interfacing with an electronic device.


SUMMARY OF THE INVENTION

According to embodiments of the present invention, there are provided methods, circuits, apparatus and systems for human machine interfacing with a computational platform or any other electronic device, such as a cell-phone, smart-phone, e-book, notebook computer, tablet computer, etc. According to some embodiments of the present invention, there may be provided an adaptive touch-screen input arrangement, such as a keyboard, keypad or any other touch screen input arrangement including one or more input elements, such as rendered or projected keys or buttons which may be projected onto or rendered on a touch screen display. The adaptive touch-screen input arrangement may be adapted to alter the size, shape or location of input elements within proximity of a finger, limb or implement used by a user to touch the screen.


According to some embodiments of the present invention, one or more sensors, such as: (1) image sensors, (2) image sensor arrays, (3) electrostatic sensors, (4) capacitive sensors, or (5) any other functionally suited sensor, may sense a location and/or motion vector of a finger, limb or implement approaching the touch screen. The sensor(s) may provide to the adaptive touch-screen input arrangement an indication of the sensed position or motion vector of the finger/limb/implement relative to the input elements or keys, thereby indicating which input elements or keys are being approached. In response to the indication, the touch screen input arrangement may alter the size, shape or location of input elements within proximity of the sensed finger, limb or implement in order to make them more prominent (e.g. larger or in a better location) and more easily engageable.
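

By way of illustration only, the following sketch shows one way such proximity-based enlargement might be realized in software; the Key structure, the adapt_layout routine and all numeric values are hypothetical and are not drawn from the disclosed embodiments.

```python
# Illustrative sketch only: enlarging virtual keys near a sensed approach point.
# All names (Key, adapt_layout, ...) and values are hypothetical.
from dataclasses import dataclass, replace
import math

@dataclass(frozen=True)
class Key:
    label: str
    x: float      # centre x, in pixels
    y: float      # centre y, in pixels
    w: float      # width, in pixels
    h: float      # height, in pixels

def adapt_layout(keys, finger_xy, radius=80.0, scale=1.5):
    """Return a new layout in which keys within `radius` of the sensed
    finger position are enlarged by `scale`; other keys are unchanged."""
    fx, fy = finger_xy
    adapted = []
    for k in keys:
        if math.hypot(k.x - fx, k.y - fy) <= radius:
            adapted.append(replace(k, w=k.w * scale, h=k.h * scale))
        else:
            adapted.append(k)
    return adapted

# Example: a finger approaching the area around key "G"
layout = [Key("F", 100, 300, 40, 50), Key("G", 150, 300, 40, 50), Key("H", 200, 300, 40, 50)]
print(adapt_layout(layout, finger_xy=(155, 310)))
```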


According to further embodiments of the present invention, there may be provided a human interface surface (e.g. touch screen display) comprising presentation and sensing elements. The presentation elements and the sensing elements may be integrated into a single substrate material or may be part of separate substrates which are mechanically attached to one another in an overlapping manner. According to further embodiments of the present invention, there may be provided a controller (e.g. display drive circuit) adapted to send one or more presentation signals to the presentation elements of the human interface surface based, at least partially, on data stored in a presentation configuration table (e.g. virtual keyboard layout including location and size of keys) and based on a current state of the device. The current state of the device may be determined based on one or more signals received from the sensing elements and/or based on one or more signals received from the device.
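

As a rough sketch of such a presentation configuration table, one could assume a simple dictionary keyed by device state; the state names, element names and coordinates below are invented for illustration and do not appear in the disclosure.

```python
# Minimal sketch of a presentation configuration table keyed by device state.
# All names and values are hypothetical.
PRESENTATION_CONFIG = {
    "text_entry": [                       # virtual keyboard layout
        {"element": "key_Q", "x": 10, "y": 400, "w": 40, "h": 50},
        {"element": "key_W", "x": 55, "y": 400, "w": 40, "h": 50},
    ],
    "phone_dialer": [                     # numeric keypad layout
        {"element": "key_1", "x": 40, "y": 350, "w": 80, "h": 80},
        {"element": "key_2", "x": 130, "y": 350, "w": 80, "h": 80},
    ],
}

def presentation_signals(device_state):
    """Select the element layout to be sent to the presentation elements
    for the current device state (e.g. which application is in focus)."""
    return PRESENTATION_CONFIG.get(device_state, [])

print(presentation_signals("phone_dialer"))
```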


According to further embodiments of the present invention, the controller may associate a function or device command signal with each of one or more signals received from the sensing elements (e.g. when the sensing element is touched), wherein the association of a command or function may be at least partially based on data from a first data set in a sensing element configuration table. The data selected from the sensing element configuration table may be correlated to data from the presentation configuration table used by the controller to send one or more signals to the presentation elements.
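

The correlation between the sensing element configuration and the presentation configuration might, for illustration, be realized as a shared layout record; the resolve_touch helper and command identifiers below are hypothetical.

```python
# Sketch (hypothetical names) of correlating a touched sensing element with a
# command, using the same layout data that drives the presentation elements.
LAYOUT = [
    {"element": "key_send",   "x": 20,  "y": 500, "w": 100, "h": 60},
    {"element": "key_cancel", "x": 140, "y": 500, "w": 100, "h": 60},
]
COMMANDS = {"key_send": "CMD_SEND_MESSAGE", "key_cancel": "CMD_DISMISS"}

def resolve_touch(tx, ty, layout=LAYOUT, commands=COMMANDS):
    """Map a touch coordinate to the command of the element rendered there."""
    for e in layout:
        if e["x"] <= tx <= e["x"] + e["w"] and e["y"] <= ty <= e["y"] + e["h"]:
            return commands.get(e["element"])
    return None

print(resolve_touch(60, 520))   # -> "CMD_SEND_MESSAGE"
```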





BRIEF DESCRIPTION OF THE DRAWINGS

The subject matter regarded as the invention is particularly pointed out and distinctly claimed in the concluding portion of the specification. The invention, however, both as to organization and method of operation, together with objects, features, and advantages thereof, may best be understood by reference to the following detailed description when read with the accompanying drawings in which:



FIG. 1 shows a block diagram of an exemplary mobile device according to some embodiments of the present invention, including an interface surface and various electric functional blocks to drive the interface surface.





It will be appreciated that for simplicity and clarity of illustration, elements shown in the figures have not necessarily been drawn to scale. For example, the dimensions of some of the elements may be exaggerated relative to other elements for clarity. Further, where considered appropriate, reference numerals may be repeated among the figures to indicate corresponding or analogous elements.


DETAILED DESCRIPTION

In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the invention. However, it will be understood by those skilled in the art that the present invention may be practiced without these specific details. In other instances, well-known methods, procedures, components and circuits have not been described in detail so as not to obscure the present invention.


Unless specifically stated otherwise, as apparent from the following discussions, it is appreciated that throughout the specification discussions utilizing terms such as “processing”, “computing”, “calculating”, “determining”, or the like, refer to the action and/or processes of a computer or computing system, or similar electronic computing device, that manipulate and/or transform data represented as physical, such as electronic, quantities within the computing system's registers and/or memories into other data similarly represented as physical quantities within the computing system's memories, registers or other such information storage, transmission or display devices.


Embodiments of the present invention may include apparatuses for performing the operations herein. This apparatus may be specially constructed for the desired purposes, or it may comprise a general purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), electrically programmable read-only memories (EPROMs), electrically erasable and programmable read only memories (EEPROMs), magnetic or optical cards, or any other type of media suitable for storing electronic instructions, and capable of being coupled to a computer system bus.


The processes and displays presented herein are not inherently related to any particular computer or other apparatus. Various general purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct a more specialized apparatus to perform the desired method. The desired structure for a variety of these systems will appear from the description below. In addition, embodiments of the present invention are not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the invention as described herein.


According to embodiments, there may be provided an interface apparatus for an electronic device including an adaptive touch-screen input arrangement adapted to alter a size, position or shape of an input element based on a signal or an indication from a touchless sensor regarding a position or motion vector of a finger, limb or implement. The adaptive touch-screen input arrangement may include a display functionally associated with a graphics processing circuit adapted to render one or more input elements and to project the one or more elements on the display. The apparatus may include a touchless sensor adapted to sense a position or motion vector of a finger, limb or implement in proximity with said display. A signal derived from an output of the touchless sensor may be provided to the graphics processing circuit and may cause the graphics processing circuit to alter a feature of one or more projected interface elements—for example the size of an input element (e.g. a keyboard key projected on the display and its associated touch-screen sensor area) in proximity with a position of a finger, limb or implement may be enlarged. The touchless sensor may be selected from a group of sensors consisting of (1) proximity sensors, (2) image sensors, (3) image sensor arrays, (4) electrostatic sensors, and (5) capacitive sensors. The interface apparatus may be part of a computing device, communication device or any other electronic device known today or to be developed in the future.
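

One hedged sketch of how a sensed position and motion vector could be resolved to the approached input element is to intersect the motion ray with the screen plane; the helper names below are illustrative only and do not appear in the disclosure.

```python
# Hypothetical sketch: predicting which rendered input element an approaching
# finger is directed towards, given a sensed 3-D position and motion vector.
def predicted_touch_point(position, velocity):
    """Intersect the finger's motion ray with the screen plane z = 0.
    `position` = (x, y, z) above the screen, `velocity` = (vx, vy, vz)."""
    x, y, z = position
    vx, vy, vz = velocity
    if vz >= 0:              # not moving towards the screen
        return None
    t = -z / vz              # parameter at which the ray reaches z = 0
    return (x + vx * t, y + vy * t)

def element_being_approached(position, velocity, layout):
    """Return the element whose rectangle contains the predicted touch point."""
    p = predicted_touch_point(position, velocity)
    if p is None:
        return None
    px, py = p
    for e in layout:
        if e["x"] <= px <= e["x"] + e["w"] and e["y"] <= py <= e["y"] + e["h"]:
            return e["element"]
    return None

layout = [{"element": "key_G", "x": 130, "y": 280, "w": 40, "h": 50}]
print(element_being_approached((150, 310, 20), (0.5, -1.5, -10.0), layout))  # -> "key_G"
```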


Turning now to FIG. 1, there is shown a block diagram of an exemplary mobile device, according to some embodiments of the present invention, including an interface surface and various electric functional blocks to drive and interact with the interface surface or touch-screen assembly. The exemplary device may include a controller 100 adapted to regulate signals to a presentation element driver 300, which presentation element driver 300 may be functionally associated with presentation elements (e.g. Light Emitting Diodes, LCD, etc.) of an interface surface 10. The controller may also be adapted to receive signals from a touch sensing element decoder 400, which decoder 400 is functionally associated with touch sensing elements (e.g. touch sensors, etc.) of the interface surface. The controller may also be adapted to receive finger/limb/implement location or motion indications or information from a touchless sensor 600.
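

For orientation only, the signal flow of FIG. 1 might be sketched as follows; the class names mirror the figure's blocks (controller 100, presentation element driver 300, touch sensing element decoder 400, touchless sensor 600), while every method name and data value is hypothetical.

```python
# Schematic sketch of the FIG. 1 signal flow; class names mirror the figure's
# blocks, but all method names and values below are hypothetical.
class PresentationElementDriver:              # block 300
    def drive(self, layout):
        print("rendering", [e["element"] for e in layout])

class TouchSensingDecoder:                    # block 400
    def decode(self, raw_touch_event):
        return raw_touch_event                # e.g. the (x, y) of a touch

class TouchlessSensor:                        # block 600
    def read(self):
        return {"position": (150, 310, 20), "velocity": (0.5, 1.0, -8.0)}

class Controller:                             # block 100
    def __init__(self, driver, decoder, sensor, config_db):
        self.driver, self.decoder = driver, decoder
        self.sensor, self.config_db = sensor, config_db

    def tick(self, active_app):
        layout = self.config_db[active_app]["layout"]
        reading = self.sensor.read()          # finger location / motion
        # ...here `layout` would be adapted near the sensed finger (see the
        # adapt_layout sketch above) before refreshing the display...
        self.driver.drive(layout)

db = {"app_messaging": {"layout": [{"element": "key_reply", "x": 10, "y": 520, "w": 110, "h": 60}]}}
Controller(PresentationElementDriver(), TouchSensingDecoder(), TouchlessSensor(), db).tick("app_messaging")
```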


It should be understood that the controller 100 may be a processor, a graphics processor, dedicated control logic or any combination thereof.


A configuration database 200 may include information used by the controller 100 in regulating signal flow to the presentation element driver. As shown in FIG. 1, the configuration database 200 may include such information as interface element (e.g. button or display area) shape and location, display properties, etc., for each of the applications 500A through 500N installed on the device. It should be understood by one of ordinary skill in the art that interface elements such as the buttons and displays mentioned above are not physical buttons or displays, but rather virtual elements projected through the interface surface 10. For each given application 500A through 500N, the configuration database 200 may also include sensing element mapping information correlating the presentation information/elements associated with the given application to specific functions. The controller 100 may use the mapping information to determine which interface element is interacted with (when the screen is touched) by the user and which function/command that interaction is meant to trigger.
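

A minimal sketch of how the configuration database 200 might be organized follows, assuming one record per application holding element geometry together with its touch-to-function mapping; the application names, element names and function identifiers are invented for illustration.

```python
# Hypothetical layout of configuration database 200: one record per installed
# application, holding element geometry and its touch-to-function mapping.
CONFIG_DB = {
    "app_messaging": {
        "layout": [
            {"element": "key_reply",  "x": 10,  "y": 520, "w": 110, "h": 60},
            {"element": "key_delete", "x": 130, "y": 520, "w": 110, "h": 60},
        ],
        "mapping": {"key_reply": "FN_OPEN_REPLY", "key_delete": "FN_DELETE_MSG"},
    },
    "app_camera": {
        "layout": [{"element": "key_shutter", "x": 90, "y": 500, "w": 140, "h": 80}],
        "mapping": {"key_shutter": "FN_CAPTURE_IMAGE"},
    },
}
```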


The controller may be adapted to alter the size, shape, location or any other feature of any element projected/rendered by the display elements based on a signal or indication provided by the touchless sensor 600 regarding finger/limb/implement location or motion relative to the sensing surface or any of the elements projected onto the sensing surface. The controller may make an input element towards which the finger/limb/implement is approaching more prominent. The controller may also adjust its touch-sensor-element-to-function mapping to correlate with the adjusted size, shape or location of the projected/displayed input element.
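

The following sketch, with hypothetical helper names, illustrates one way of keeping the rendered geometry and the touch hit-regions in agreement after an element is made more prominent.

```python
# Sketch (hypothetical names): enlarge the approached element and regenerate
# the touch hit-regions from the altered geometry, so the rendered key and its
# touch-sensitive area stay in agreement.
def make_prominent(layout, element, scale=1.6):
    new_layout = []
    for e in layout:
        if e["element"] == element:
            grow_w, grow_h = e["w"] * (scale - 1), e["h"] * (scale - 1)
            e = {**e,
                 "x": e["x"] - grow_w / 2, "y": e["y"] - grow_h / 2,
                 "w": e["w"] * scale,      "h": e["h"] * scale}
        new_layout.append(e)
    # Hit regions are re-derived from the rendered geometry; the element-to-
    # function associations themselves are unchanged.
    hit_regions = {e["element"]: (e["x"], e["y"], e["w"], e["h"]) for e in new_layout}
    return new_layout, hit_regions
```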


According to some embodiments of the present invention, the touchless sensor 600 may be adapted to determine the position and/or motion of a finger/limb/implement relative to the touch-screen or relative to elements projected/rendered/displayed thereon. The touchless sensor may be part of an image based human machine interface. The image based human machine interface may include one or more image sensors and software running on the controller, on another general purpose processor or on a dedicated processor. According to further embodiments of the present invention, the sensor 600 may be part of an electrostatic sensing arrangement. It should be understood by one of skill in the art that any functionally equivalent (i.e. capable of serving the same function) touchless sensor may be used as part of some embodiments of the present invention.
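

As an illustration of an image based approach, a coarse fingertip position and image-plane motion vector could be derived from two successive foreground masks; the functions below are a simplified sketch that assumes plain 0/1 grids rather than any particular sensor API.

```python
# Illustrative only: a coarse fingertip position and image-plane motion vector
# derived from two successive foreground masks (plain 0/1 grids).
def centroid(mask):
    pts = [(x, y) for y, row in enumerate(mask) for x, v in enumerate(row) if v]
    if not pts:
        return None
    return (sum(x for x, _ in pts) / len(pts), sum(y for _, y in pts) / len(pts))

def motion_vector(prev_mask, curr_mask, dt=1 / 30):
    """Approximate the finger's image-plane velocity between two frames."""
    p0, p1 = centroid(prev_mask), centroid(curr_mask)
    if p0 is None or p1 is None:
        return None
    return ((p1[0] - p0[0]) / dt, (p1[1] - p0[1]) / dt)
```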


While certain features of the invention have been illustrated and described herein, many modifications, substitutions, changes, and equivalents will now occur to those skilled in the art. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the true spirit of the invention.

Claims
  • 1. A user interface apparatus, comprising: a touch-screen input arrangement including a touchscreen and graphics processing circuitry adapted to render touch activated user input elements on said touchscreen, wherein a touch activated user input element is a control element having an associated function triggered by the user physically touching a point on the touchscreen where the element is rendered; one or more image sensors adapted to sense a position or motion vector, relative to said touch activated user input elements, of a user finger, limb or control implement approaching said touch-screen; and mapping information correlating locations of the rendered touch activated user input elements to functions of applications; processing circuitry adapted to determine a given rendered touch activated input element the user finger, limb or control implement is approaching, based on the position or motion vector sensed by said one or more image sensors; a controller adapted to: (1) cause said graphics processing circuitry to facilitate interaction with said given rendered touch activated input element by altering a size, shape or location of said given rendered touch activated input element towards the finger, limb or implement, and (2) modify the mapping information to account for the altered size, shape or location of the given rendered touch activated input element.
  • 2. An electronic device, comprising: a processor; a battery; a touch-screen input arrangement including a touchscreen and graphics processing circuitry adapted to render touch activated user input elements on said touchscreen, wherein a touch activated user input element is a control element having an associated function triggered by the user physically touching a point on the touchscreen where the element is rendered; a touchless sensor adapted to sense a motion vector, relative to said touch activated user input elements, of a user finger, limb or control implement approaching said touch-screen; and a controller adapted to: (1) receive from said touchless sensor the motion vector of the user finger, limb or control implement, (2) determine a given rendered touch activated input element the received motion vector is directed towards, and (3) cause said graphics processing circuitry to facilitate interaction with said given rendered touch activated input element by altering a location, size, position or shape of said given rendered touch activated input element; wherein altering the given touch activated input element includes enlarging the given input element.
  • 3. The device according to claim 2, wherein said touchless sensor is selected from the group of sensors consisting of (1) proximity sensors, (2) image sensors, (3) image sensor arrays, (4) electrostatic sensors, and (5) capacitive sensors.
  • 4. A method for human-machine interfacing, said method comprising: providing, upon a touchscreen, a graphic user interface including touch activated user input elements, wherein a touch activated user input element is a control element having an associated function triggered by the user physically touching a point on the touchscreen where the element is rendered; determining, by use of one or more image sensors, a motion vector, relative to the touch activated user input elements, of a user finger, limb or control implement approaching the touch-screen input arrangement; determining, based on the determination of a motion vector of a user finger, limb or implement, a given rendered touch activated input element the motion vector is directed towards; facilitating interaction with the given rendered touch activated input element by altering a location, size, position or shape of the given rendered touch activated input element; wherein altering the given touch activated input element includes enlarging the given input element.
PCT Information
Filing Document Filing Date Country Kind 371c Date
PCT/IL2010/000791 9/21/2010 WO 00 3/20/2012
Publishing Document Publishing Date Country Kind
WO2011/033519 3/24/2011 WO A
US Referenced Citations (119)
Number Name Date Kind
4376950 Brown et al. Mar 1983 A
5130794 Ritchey Jul 1992 A
5515183 Hashimoto May 1996 A
5691885 Ward et al. Nov 1997 A
5703704 Nakagawa et al. Dec 1997 A
5745719 Falcon Apr 1998 A
5831633 Van Roy Nov 1998 A
5835133 Moreton et al. Nov 1998 A
5852450 Thingvold Dec 1998 A
5909218 Naka et al. Jun 1999 A
6115482 Sears et al. Sep 2000 A
6243106 Rehg et al. Jun 2001 B1
6303924 Adan et al. Oct 2001 B1
6317130 Ishikawa et al. Nov 2001 B1
6388670 Naka et al. May 2002 B2
6529643 Loce et al. Mar 2003 B1
6545663 Arbter et al. Apr 2003 B1
6554706 Kim et al. Apr 2003 B2
6597801 Cham et al. Jul 2003 B1
6657670 Cheng Dec 2003 B1
6674877 Jojic et al. Jan 2004 B1
6681031 Cohen et al. Jan 2004 B2
6833843 Mojaver et al. Dec 2004 B2
6906687 Werner Jun 2005 B2
7061492 Carrai et al. Jun 2006 B2
7061532 Silverstein Jun 2006 B2
7116330 Marshall et al. Oct 2006 B2
7123292 Seeger et al. Oct 2006 B1
7184589 Okubo Feb 2007 B2
7257237 Luck et al. Aug 2007 B1
7308112 Fujimura et al. Dec 2007 B2
7366278 Fu et al. Apr 2008 B2
7429997 Givon Sep 2008 B2
7755608 Chang et al. Jul 2010 B2
7783118 Zhou Aug 2010 B2
7885480 Bryll et al. Feb 2011 B2
7903141 Mariano et al. Mar 2011 B1
7936932 Bashyam et al. May 2011 B2
7978917 Lei et al. Jul 2011 B2
8005263 Fujimura et al. Aug 2011 B2
8036494 Chen Oct 2011 B2
8094873 Kelusky et al. Jan 2012 B2
8094943 Eaton et al. Jan 2012 B2
8107726 Xu et al. Jan 2012 B2
8111284 Givon Feb 2012 B1
8114172 Givon Feb 2012 B2
8237775 Givon Aug 2012 B2
8432390 Givon Apr 2013 B2
8462199 Givon Jun 2013 B2
20010007452 Naka et al. Jul 2001 A1
20020191239 Psaltis et al. Dec 2002 A1
20030007680 Iijima et al. Jan 2003 A1
20040155962 Marks Aug 2004 A1
20040161133 Elazar et al. Aug 2004 A1
20040193413 Wilson et al. Sep 2004 A1
20040228530 Schwartz Nov 2004 A1
20050023448 Ogawara et al. Feb 2005 A1
20050041842 Frakes et al. Feb 2005 A1
20050063596 Yomdin et al. Mar 2005 A1
20050166163 Chang et al. Jul 2005 A1
20050232514 Chen Oct 2005 A1
20050259870 Kondo et al. Nov 2005 A1
20050271279 Fujimura et al. Dec 2005 A1
20060010400 Dehlin et al. Jan 2006 A1
20060056679 Redert et al. Mar 2006 A1
20060104480 Fleisher May 2006 A1
20060148527 Blount Jul 2006 A1
20060161870 Hotelling et al. Jul 2006 A1
20060164230 DeWind et al. Jul 2006 A1
20060187305 Trivedi et al. Aug 2006 A1
20060294509 Mital et al. Dec 2006 A1
20070012349 Gaudiana et al. Jan 2007 A1
20070098250 Molgaard et al. May 2007 A1
20070183633 Hoffmann Aug 2007 A1
20070183663 Wang et al. Aug 2007 A1
20070236475 Wherry Oct 2007 A1
20070259717 Mattice et al. Nov 2007 A1
20070285419 Givon Dec 2007 A1
20070285554 Givon Dec 2007 A1
20080007533 Hotelling Jan 2008 A1
20080013793 Hillis et al. Jan 2008 A1
20080030460 Hildreth et al. Feb 2008 A1
20080036732 Wilson et al. Feb 2008 A1
20080037829 Givon Feb 2008 A1
20080037869 Zhou Feb 2008 A1
20080100572 Boillot May 2008 A1
20080101722 Bryll et al. May 2008 A1
20080104547 Morita et al. May 2008 A1
20080111710 Boillot May 2008 A1
20080143975 Dennard et al. Jun 2008 A1
20080148149 Singh et al. Jun 2008 A1
20080181499 Yang et al. Jul 2008 A1
20080284726 Boillot Nov 2008 A1
20090058833 Newton Mar 2009 A1
20090062696 Nathan et al. Mar 2009 A1
20090080715 Van Beek et al. Mar 2009 A1
20090116732 Zhou et al. May 2009 A1
20090141987 McGarry et al. Jun 2009 A1
20100005427 Zhang et al. Jan 2010 A1
20100066735 Givon Mar 2010 A1
20100111370 Black et al. May 2010 A1
20100141802 Knight et al. Jun 2010 A1
20100194862 Givon Aug 2010 A1
20100208038 Kutliroff et al. Aug 2010 A1
20100295799 Nicholson et al. Nov 2010 A1
20100303290 Mathe Dec 2010 A1
20100328351 Tan Dec 2010 A1
20110045812 Kim et al. Feb 2011 A1
20110052068 Cobb et al. Mar 2011 A1
20110069152 Wang et al. Mar 2011 A1
20110080496 Givon Apr 2011 A1
20110129124 Givon Jun 2011 A1
20110163948 Givon et al. Jul 2011 A1
20110286673 Givon et al. Nov 2011 A1
20110292036 Sali et al. Dec 2011 A1
20120176414 Givon Jul 2012 A1
20120176477 Givon Jul 2012 A1
20120218183 Givon et al. Aug 2012 A1
20130120319 Givon May 2013 A1
Foreign Referenced Citations (18)
Number Date Country
1 115254 Jul 2001 EP
10-040418 Feb 1998 JP
2001-246161 Sep 2001 JP
2002-216146 Aug 2002 JP
2004-062692 Feb 2004 JP
2006-040271 Feb 2006 JP
2007-531113 Jan 2007 JP
2007-302223 Nov 2007 JP
WO 03025859 Mar 2003 WO
WO 03039698 May 2003 WO
WO 2004013814 Feb 2004 WO
WO 2004094943 Nov 2004 WO
WO 2005114556 Dec 2005 WO
WO 2006011153 Feb 2006 WO
WO 2006099597 Sep 2006 WO
WO 2008126069 Oct 2008 WO
WO 2011033519 Mar 2011 WO
WO 2013069023 May 2013 WO
Non-Patent Literature Citations (13)
Entry
Carranza et al., “Free-Viewpoint Video of Human Actors”, Proc. of ACM SIGGRAPH 2003, Jul. 27, 2003.
Cheung G K M et al., “Shape-from-silhouette of articulated objects and its use for human body kinematics estimation and motion capture”, Proceedings / 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Jun. 18-20, 2003, Madison, Wisconsin.
Starck et al., “Model-based multiple view reconstruction of people”, Proceedings of the Eight IEEE International Conference on Computer Vision. (ICCV). Nice, France, Oct. 13-16, 2003.
Molet T et al: “An animation interface designed for motion capture”, Computer Animation '97 Geneva, Switzerland Jun. 5-6, 1997.
Kronrod B et al., “Optimized triangle mesh compression using prediction trees”, Computer Graphics and Applications, 2000. Proceedings. the Eighth Pacific Conference on Hong Kong, China Oct. 3-5, 2000.
Theobalt C et al.: “Enhancing silhouette-based human motion capture with 3D motion fields”, Computer Graphics and Applications, 2003. Proceedings. 11th Pacific Conference on Oct. 8-10, 2003, Piscataway, NJ, USA, IEEE, Oct. 8, 2003.
Bregler C et al: “Tracking people with twists and exponential maps”, Computer Vision and Pattern Recognition, 1998. Proceedings. 1998 IEEE Computer Society Conference on Santa Barbara, CA, USA Jun. 23-25, 1998, Los Alamitos, CA,USA,IEEE Comput. Soc, US, Jun. 23, 1998, pp. 8-15, XP010291718.
Sminchisescu et al. “Estimated Articulated Human Motion with Covariance Scaled Sampling”. Published 2003.
Sappa et al. “Monocular 3D Human Body Reconstruction toward Depth Augmentation of Television Sequences”. Published 2003.
Sminchisescu et al. “Human Pose Estimation from Silhouettes a Consistent Approach Using Distance Level Set”. Published 2002.
Sminchisescu C et al: “Kinematic jump processes for monocular 3D human tracking”, Proceedings / 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Jun. 18-20, 2003, Madison, Wisconsin; [Proceedings of the IEEE Computer Conference on Computer Vision and Pattern Recognition], Los Alamitos, Calif. [U.A, vol. 1, Jun. 18, 2003, pp. 69-76, XP010644883, DOI: 10.1109/CVPR.2003.1211339 ISBN: 978-0-7695-1900-5.
Ren Ng, “Digital Light Field Photography”, Jul. 2006, (available at www.lytro.com/rennig-thesis.pdf).
D'Apuzzo N et al: “Modeling human bodies from video sequences”, SPIE Proceedings, The International Society for Optical Engineering—SPIE, Bellingham, Washington, USA, vol. 3641, Jan. 1, 1998, pp. 36-47, XP002597223, ISSN: 0277-786X, DOI: 10.1117/12.333796.
Related Publications (1)
Number Date Country
20120176414 A1 Jul 2012 US
Provisional Applications (1)
Number Date Country
61244136 Sep 2009 US