Claims
- 1. A system to enable a user to interact with a virtual input device using a user-controlled object, the system comprising:
a single sensor system that acquires data representing a single image at a given time, from which data three-dimensional coordinate information of a relevant position of at least a portion of said user-controlled object may be determined such that a location defined on said virtual input device contacted by said user-controlled object is identifiable; and a processor system to determine whether a portion of said user-controlled object contacted a location defined on said virtual input device, and if contacted to determine what function of said virtual input device is associated with said location; wherein said system determines if, when in time, and where interaction between said user-controlled object and said virtual input device occurs.
- 2. The system of claim 1, further including:
means for making available to a companion system information commensurate with contact location determined by said processor system, said companion system including at least one device selected from a group consisting of (i) a PDA, (ii) a wireless telephone, (iii) a cellular telephone, (iv) a set-top box, (v) a mobile electronic device, (vi) an electronic device, (vii) a computer, (viii) an appliance adapted to accept input information, and (ix) an electronic system; wherein by controlling said user-controlled object a user interacts with said virtual input device to provide information to said companion system.
- 3. The system of claim 1, wherein said single sensor system acquires said data using time-of-flight from said single sensor system to a portion of said user-controlled object.
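Claim 3's time-of-flight principle rests on a simple relation: distance is half the round-trip travel time of light multiplied by the speed of light. A minimal sketch of that arithmetic (function name and example values are illustrative, not from the patent):

```python
# Time-of-flight ranging: distance = (speed of light x round-trip time) / 2.
# Illustrative only; the name and the 30 cm example are assumptions.

C = 299_792_458.0  # speed of light, m/s

def distance_from_round_trip(t_seconds: float) -> float:
    """Distance to the reflecting surface given the round-trip time of light."""
    return C * t_seconds / 2.0

# A fingertip ~30 cm from the sensor returns light in about 2 nanoseconds:
t = 2 * 0.30 / C
d = distance_from_round_trip(t)  # ~0.30 m
```

The sub-nanosecond timing resolution this implies is why such systems use dedicated sensor ICs rather than ordinary cameras.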
- 4. The system of claim 1, further including feedback to guide said user in positioning said user-controlled object with respect to said virtual input device, said feedback including at least one type of feedback selected from a group consisting of (i) audible feedback, (ii) audible feedback representing information input by said user-controlled object, (iii) audible feedback representing proximity of said user-controlled object to said virtual input device, (iv) audible feedback representing contact location of said user-controlled object on said virtual input device, (v) visual feedback, (vi) visual feedback representing information input by said user-controlled object, (vii) visual feedback including a display representing proximity of said user-controlled object to said virtual input device, and (viii) visual feedback including a display representing contact location of said user-controlled object with said virtual input device.
- 5. The system of claim 1, wherein said virtual input device is a keyboard, and further including feedback to guide said user in positioning said user-controlled object with respect to said keyboard, said feedback including at least one type of feedback selected from a group consisting of (i) audible feedback, (ii) audible enunciation of each virtual key's name when said virtual key is contacted by said user-controlled object, (iii) an audible key click sound when a virtual key is contacted by said user-controlled object, (iv) an audible key click sound whose sound varies with mode of operation of a virtual key contacted by said user-controlled object, (v) a display of visual feedback, (vi) a display of visual feedback representing at least one key on said keyboard, (vii) a display of visual feedback representing at least one key on said keyboard and at least a portion of said user-controlled object, (viii) a display of visual feedback representing at least two keys on said keyboard wherein a key on said keyboard contacted by said user-controlled object is visually distinguishable from adjacent keys on said keyboard, (ix) a display of visual feedback representing information input by said user-controlled object, and (x) a display of visual feedback representing an image whose position signifies position of said user-object relative to a virtual key when said virtual input device is a virtual keyboard, and wherein size of said image signifies distance from a lower surface of said user-object to said virtual keyboard.
- 6. The system of claim 1, wherein said virtual input device is a keyboard, and further including a language routine that selects most likely user-intended keystrokes as said user interacts with said keyboard based upon knowledge of language used by said user, based upon recent history of key characters on said keyboard already contacted by said user-controlled object, and based upon knowledge of approximate current proximity of said user-controlled object to said keyboard.
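One way the language routine of claim 6 could combine its three inputs is to weight each candidate key by a language-model probability over the recent character history and by the fingertip's spatial proximity to the key. This is a hypothetical sketch, not the patented routine; the bigram table, coordinates, and Gaussian weighting are all illustrative assumptions:

```python
# Hypothetical keystroke disambiguation: combine a toy bigram language model
# with fingertip-to-key proximity. Nothing here is taken from the patent.
import math

# Toy bigram probabilities P(next_char | prev_char); illustrative values.
BIGRAM = {("t", "h"): 0.4, ("t", "r"): 0.1, ("t", "y"): 0.01}

def key_score(prev_char, candidate, fingertip_xy, key_center_xy, sigma=0.5):
    """Language likelihood times a Gaussian proximity weight."""
    lang_p = BIGRAM.get((prev_char, candidate), 0.001)
    dx = fingertip_xy[0] - key_center_xy[0]
    dy = fingertip_xy[1] - key_center_xy[1]
    spatial_p = math.exp(-(dx * dx + dy * dy) / (2 * sigma * sigma))
    return lang_p * spatial_p

def most_likely_key(prev_char, fingertip_xy, key_centers):
    """Pick the candidate key maximizing the combined score."""
    return max(key_centers,
               key=lambda k: key_score(prev_char, k, fingertip_xy, key_centers[k]))

# Fingertip lands between 'h' and 'y'; the history "t" favors 'h'.
keys = {"h": (5.5, 1.0), "y": (5.5, 0.0), "r": (3.5, 0.0)}
best = most_likely_key("t", (5.5, 0.6), keys)  # → "h"
```

The same structure tolerates the coarse spatial resolution a single 3-D sensor provides: the language term resolves ambiguity the proximity term cannot.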
- 7. The system of claim 1, wherein said virtual input device is dynamically user-selectable between a keyboard and a digitizer tablet.
- 8. The system of claim 1, further including means for calculating velocity of said user-controlled object at least when proximate said virtual input device;
wherein a contact interaction by said user-controlled object with said virtual input device is adjudicated to occur only if a minimum threshold velocity is exceeded; wherein instances of false interactions are reduced.
- 9. The system of claim 8, wherein said minimum threshold velocity is user-controlled such that reliability of user interaction with said virtual input device is customizable to said user.
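The velocity gating of claims 8 and 9 can be sketched in a few lines: estimate the object's speed from consecutive frame positions and register a contact only above a user-tunable threshold. The threshold value and frame rate below are assumptions for illustration only:

```python
# Sketch of velocity-threshold contact adjudication (claims 8-9).
# The 0.15 m/s threshold and 30 Hz frame interval are illustrative.

def velocity(pos_prev, pos_curr, dt):
    """Average 3-D speed (m/s) between two frame positions."""
    d = sum((a - b) ** 2 for a, b in zip(pos_curr, pos_prev)) ** 0.5
    return d / dt

def is_keystroke(pos_prev, pos_curr, dt, min_speed=0.15):
    """Adjudicate a contact only if the minimum threshold velocity is exceeded."""
    return velocity(pos_prev, pos_curr, dt) > min_speed

# A deliberate downstroke of 1 cm per 30 Hz frame (~0.3 m/s) is accepted;
# a hovering drift of 1 mm per frame (~0.03 m/s) is rejected.
assert is_keystroke((0, 0, 0.01), (0, 0, 0.0), 1 / 30)
assert not is_keystroke((0, 0, 0.001), (0, 0, 0.0), 1 / 30)
```

Exposing `min_speed` as a user setting is exactly the customization claim 9 describes: a higher threshold suppresses more accidental touches at the cost of missing gentle keystrokes.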
- 10. The system of claim 1, further including means for training said user to more efficiently interact with said virtual input device.
- 11. The system of claim 10, wherein said means for training includes at least one of (i) means for providing said user with visual feedback, and (ii) means for providing said user with acoustic feedback.
- 12. The system of claim 1, further including a tool to enable said user to generate a user-customized template of a virtual input device.
- 13. The system of claim 12, wherein said tool enables said user to assign a virtual input device function to a given location defined on said virtual input device.
- 14. The system of claim 1, wherein said processor system can discern user gestures as a form of user interaction with said virtual input device.
- 15. The system of claim 1, further including means for providing a user-viewable image of said virtual input device.
- 16. The system of claim 1, further including an optical system that generates a user-viewable image of said virtual input device.
- 17. The system of claim 1, further including an optical system that includes at least one diffractive optical element, said optical system generating a user-viewable image of said virtual input device.
- 18. The system of claim 1, further including means for operating said system in at least a low power consumption mode and a higher power consumption mode, wherein selection of power consumption mode is made dynamically as a function of time interval between consecutive user interactions with said virtual input device;
wherein power consumed by said system is reduced.
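Claim 18's mode selection reduces to a function of one variable, the interval since the last interaction. A minimal sketch, assuming a 5-second idle cutoff (that value, and the frame-rate interpretation in the comments, are assumptions, not from the patent):

```python
# Sketch of dynamic power-mode selection (claims 18/39): choose the mode
# from the time elapsed since the last user interaction.
# The 5-second idle cutoff is an illustrative assumption.

IDLE_CUTOFF_S = 5.0

def select_power_mode(seconds_since_last_interaction: float) -> str:
    """Return 'low' after sustained inactivity, else 'high'."""
    if seconds_since_last_interaction > IDLE_CUTOFF_S:
        return "low"   # e.g. drop the sensor frame rate while idle
    return "high"      # full frame rate while the user is actively typing

assert select_power_mode(0.5) == "high"
assert select_power_mode(12.0) == "low"
```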
- 19. The system of claim 1, wherein:
said single sensor system captures data in frames representing a single image at a given time from which data said three-dimensional coordinate information of a relevant position of at least a portion of said user-controlled object may be determined with respect to said virtual input device from at least one of (i) a single data frame, and (ii) multiple data frames captured at substantially the same time such that a location defined on said virtual input device contacted by said user-controlled object is identifiable.
- 20. The system of claim 2, wherein processing tasks associated with operation of said system may be carried out at least in part by a processor associated with said companion system.
- 21. The system of claim 1, wherein:
said virtual input device includes a virtual keyboard; and said user-controlled object includes at least a portion of a hand of said user.
- 22. A method for a user to interact with a virtual input device using a user-controlled object, the method comprising the following steps:
(a) acquiring data representing a single image at a given time from a single sensor system, from which data three-dimensional coordinate information of a relevant position of at least a portion of said user-controlled object may be determined such that a location defined on said virtual input device contacted by said user-controlled object is identifiable; and (b) processing data acquired at step (a) to determine whether a portion of said user-controlled object contacted a location defined on said virtual input device, and if contacted to determine what function of said virtual input device is associated with said location; wherein said method determines if, when in time, and where interaction between said user-controlled object and said virtual input device occurs.
- 23. The method of claim 22, further including:
(c) making available to a companion system information commensurate with contact location determined at step (b), said companion system including at least one device selected from a group consisting of (i) a PDA, (ii) a wireless telephone, (iii) a cellular telephone, (iv) a set-top box, (v) a mobile electronic device, (vi) an electronic device, (vii) a computer, (viii) an appliance adapted to accept input information, and (ix) an electronic system; wherein by controlling said user-controlled object a user interacts with said virtual input device to provide information to said companion system.
- 24. The method of claim 22, wherein at step (a), said data is acquired using time-of-flight from said single sensor system to a portion of said user-controlled object.
- 25. The method of claim 22, further including providing feedback to guide said user in positioning said user-controlled object with respect to said virtual input device, said feedback including at least one type of feedback selected from a group consisting of (i) audible feedback, (ii) audible feedback representing information input by said user-controlled object, (iii) audible feedback representing proximity of said user-controlled object to said virtual input device, (iv) audible feedback representing contact location of said user-controlled object on said virtual input device, (v) visual feedback, (vi) visual feedback representing information input by said user-controlled object, (vii) visual feedback including a display representing proximity of said user-controlled object to said virtual input device, and (viii) visual feedback including a display representing contact location of said user-controlled object with said virtual input device.
- 26. The method of claim 22, wherein said virtual input device is a keyboard, and further including providing feedback to guide said user in positioning said user-controlled object with respect to said keyboard, said feedback including at least one type of feedback selected from a group consisting of (i) audible feedback, (ii) audible enunciation of each virtual key's name when said virtual key is contacted by said user-controlled object, (iii) an audible key click sound when a virtual key is contacted by said user-controlled object, (iv) an audible key click sound whose sound varies with mode of operation of a virtual key contacted by said user-controlled object, (v) a display of visual feedback, (vi) a display of visual feedback representing at least one key on said keyboard, (vii) a display of visual feedback representing at least one key on said keyboard and at least a portion of said user-controlled object, (viii) a display of visual feedback representing at least two keys on said keyboard wherein a key on said keyboard contacted by said user-controlled object is visually distinguishable from adjacent keys on said keyboard, (ix) a display of visual feedback representing information input by said user-controlled object, and (x) a display of visual feedback representing an image whose position signifies position of said user-object relative to a virtual key when said virtual input device is a virtual keyboard, and wherein size of said image signifies distance from a lower surface of said user-object to said virtual keyboard.
- 27. The method of claim 22, wherein said virtual input device is a keyboard, and further including providing a language routine that selects most likely user-intended keystrokes as said user interacts with said keyboard based upon knowledge of language used by said user, based upon recent history of key characters on said keyboard already contacted by said user-controlled object, and based upon knowledge of approximate current proximity of said user-controlled object to said keyboard.
- 28. The method of claim 22, wherein said virtual input device is dynamically user-selectable between a keyboard and a digitizer tablet.
- 29. The method of claim 22, further including providing means for calculating velocity of said user-controlled object at least when proximate said virtual input device;
wherein a contact interaction by said user-controlled object with said virtual input device is adjudicated to occur only if a minimum threshold velocity is exceeded; wherein instances of false interactions are reduced.
- 30. The method of claim 29, wherein said minimum threshold velocity is user-controlled such that reliability of user interaction with said virtual input device is customizable to said user.
- 31. The method of claim 22, further including providing means for training said user to more efficiently interact with said virtual input device.
- 32. The method of claim 31, wherein said means for training includes at least one of (i) means for providing said user with visual feedback, and (ii) means for providing said user with acoustic feedback.
- 33. The method of claim 22, further including providing a tool to enable said user to generate a user-customized template of a virtual input device.
- 34. The method of claim 33, wherein said tool enables said user to assign a virtual input device function to a given location defined on said virtual input device.
- 35. The method of claim 22, wherein step (b) includes discerning user gestures as a form of user interaction with said virtual input device.
- 36. The method of claim 22, further including providing a user-viewable image of said virtual input device.
- 37. The method of claim 22, further including providing an optical system that generates a user-viewable image of said virtual input device.
- 38. The method of claim 22, further including providing an optical system that includes at least one diffractive optical element, said optical system generating a user-viewable image of said virtual input device.
- 39. The method of claim 22, further including operating said system in at least a low power consumption mode and a higher power consumption mode, wherein selection of power consumption mode is made dynamically as a function of time interval between consecutive user interactions with said virtual input device;
wherein power consumed by said system is reduced.
- 40. The method of claim 22, wherein step (a) includes capturing data in frames representing a single image at a given time from which data said three-dimensional coordinate information of a relevant position of at least a portion of said user-controlled object may be determined with respect to said virtual input device from at least one of (i) a single data frame, and (ii) multiple data frames captured at substantially the same time such that a location defined on said virtual input device contacted by said user-controlled object is identifiable.
- 41. The method of claim 23, wherein at step (b), processing tasks associated with operation of said system may be carried out at least in part by a processor associated with said companion system.
- 42. The method of claim 22, wherein:
said virtual input device includes a virtual keyboard; and said user-controlled object includes at least a portion of a hand of said user.
RELATION TO PREVIOUSLY FILED APPLICATION
[0001] This is a continuation of co-pending U.S. utility patent application Ser. No. 09/502,499, filed on Feb. 11, 2000, which will issue as U.S. Pat. No. 6,614,422 on Sep. 2, 2003. The '499 application claimed priority from U.S. provisional patent application, serial No. 60/163,445, filed on Nov. 4, 1999, entitled “Method and Device for 3D Sensing of Input Commands to Electronic Devices”, in which applicants herein were applicants therein. The '499 application also referenced applicant Bamji's then co-pending U.S. patent application Ser. No. 09/401,059, filed on Sep. 22, 1999, entitled “CMOS-COMPATIBLE THREE-DIMENSIONAL IMAGE SENSOR IC”, which '059 application issued as U.S. Pat. No. 6,323,942 on Nov. 27, 2002. Each of these applications and U.S. patents was assigned to common assignee herein Canesta, Inc.
Provisional Applications (1)

| Number     | Date     | Country |
|------------|----------|---------|
| 60/163,445 | Nov 1999 | US      |

Continuations (1)

|        | Number     | Date     | Country |
|--------|------------|----------|---------|
| Parent | 09/502,499 | Feb 2000 | US      |
| Child  | 10/651,919 | Aug 2003 | US      |