Claims
- 1. An electronic input device comprising:
a sensor system capable of providing information for approximating a position of an object contacting a surface over an active sensing area; and
a projector capable of displaying an image onto a projection area on the surface, wherein the image indicates one or more input areas where placement of an object is to have a corresponding input;
and wherein at least one of the sensor system and the projector is oriented so that the image appears within an intersection of the active sensing area and the projection area.
- 2. The electronic input device of claim 1, further comprising:
a processor coupled to the sensor system, wherein in response to the object contacting the surface within any of the one or more input areas, the processor is configured to use the information provided from the sensor system to approximate the position of the object contacting the surface so that the input area contacted by the object can be identified.
- 3. The electronic input device of claim 1, wherein the sensor system comprises a sensor light to direct light over the surface, and a light detecting device to capture the directed light reflecting off of the object, wherein the sensor light directs light over a first area of the surface and the light detecting device detects light over a second area of the surface, and wherein the active sensing area is formed by an intersection of the first area and the second area.
- 4. The electronic input device of claim 2, wherein the processor is configured to identify an input value from the identified input area contacted by the object.
- 5. The electronic input device of claim 3, wherein the light detecting device identifies a pattern captured from the light reflecting off the object, the pattern being measurable to indicate the approximate position of the object contacting the surface.
- 6. The electronic input device of claim 1, wherein the one or more input areas indicated by the image include a set of keys, and wherein each key corresponds to one of the input areas.
- 7. The electronic input device of claim 1, wherein the one or more input areas indicated by the image include a set of alphanumeric keys.
- 8. The electronic input device of claim 7, wherein the set of alphanumeric keys correspond to a QWERTY keyboard.
- 9. The electronic input device of claim 1, wherein the one or more input areas indicated by the image include one or more input areas corresponding to an interface that operates as one or more of a mouse pad region, handwriting recognition area, and a multi-directional pointer.
- 10. The electronic input device of claim 1, wherein the projector is configured to reconfigure the image to change the one or more input areas that are displayed.
- 11. An electronic input device comprising:
a sensor system capable of providing information for approximating a position of an object contacting a surface over an active sensing area; and
a projector capable of displaying a keyboard onto a projection area on the surface, wherein the keyboard indicates a plurality of keys where placement of an object is to have a corresponding input;
and wherein at least one of the sensor system and the projector is oriented so that the keyboard appears within an intersection of the active sensing area and the projection area.
- 12. The electronic input device of claim 11, further comprising:
a processor coupled to the sensor system, wherein in response to the object contacting the surface within any area designated by one of the plurality of keys, the processor uses the information to approximate the position of the object contacting the surface so that a selected key is determined from the plurality of keys, the selected key corresponding to the area contacted by the object.
- 13. The electronic input device of claim 11, wherein the sensor system comprises a sensor light to direct light over the surface, and a light detecting device to capture the directed light reflecting off of the object, wherein the sensor light directs light over a first area of the surface and the light detecting device detects light over a second area of the surface, and wherein the active sensing area is formed by an intersection of the first area and the second area.
- 14. The electronic input device of claim 12, wherein the processor identifies an input value from the selected key.
- 15. The electronic input device of claim 13, wherein the light detecting device identifies a pattern captured from the light reflecting off the object, the pattern being measurable to indicate the approximate position of the object contacting the surface at the selected key.
- 16. The electronic input device of claim 11, wherein the keyboard is a QWERTY keyboard.
- 17. The electronic input device of claim 11, wherein the projector delineates individual keys in the plurality of keys by shading at least a portion of each of the individual keys.
- 18. The electronic device of claim 11, wherein the projector delineates individual keys in the plurality of keys by shading only a portion of a border for each of the individual keys.
- 19. The electronic device of claim 18, wherein the projector shades the portion of the border for each of the individual keys forming the keyboard along a common orientation.
- 20. The electronic device of claim 11, wherein a position where the keyboard is displayed is based on a designated dimension of the keyboard, wherein the position is determined by a region of the intersection area that is closest to the sensor system and can still accommodate the size of the keyboard.
- 21. The electronic device of claim 11, wherein a size of the keyboard is based on a designated position of the keyboard, wherein the size of the keyboard is based at least in part on a width of the keyboard fitting within the intersection area at the position where the keyboard is to be displayed.
- 22. The electronic device of claim 21, wherein a depth-wise dimension of the keyboard is designated, and wherein a width of the keyboard is approximately a maximum that can fit within the intersection area at the position where the keyboard is to be displayed.
- 23. The electronic device of claim 22, wherein a shape of the keyboard is conical.
- 24. The electronic device of claim 22, wherein a shape of the keyboard is conical so that a maximum width-wise dimension of the keyboard is at least 75% of a width-wise dimension of the intersection area at a depth where the maximum width-wise dimension of the keyboard occurs.
- 25. The electronic device of claim 22, wherein a shape of the keyboard is conical so that a maximum width-wise dimension of the keyboard is at least 90% of a width-wise dimension of the intersection area at a depth where the maximum width-wise dimension of the keyboard occurs.
- 26. The electronic device of claim 11, wherein the projector delineates individual keys in the plurality of keys by shading at least a portion of each of the individual keys, and wherein at least a first key in the plurality of the keys is delineated from one or more other keys adjacent to that key by projected dotted lines.
- 27. The electronic device of claim 16, wherein a set of keys having individual keys that are not marked as being one of the alphabet characters is positioned furthest away from the sensor system along a depth-wise direction.
- 28. The electronic device of claim 11, wherein the projector projects at least some of the keyboard using a gray scale light medium.
- 29. The electronic device of claim 11, wherein the plurality of keys include one or more occlusion keys that can form two-key combinations with other keys in the plurality of keys, and wherein the plurality of keys are arranged so that the selection of any one of the other keys does not preclude the sensor system from detecting that one of the one or more occlusion keys is concurrently selected.
- 30. The electronic input device of claim 12, wherein the projector displays a region along with the keyboard, the region being designated for the sensor system to detect a placement and movement of an object within the region.
- 31. The electronic device of claim 30, wherein the processor interprets a movement of the object from a first position within the region to a second position within the region as an input.
- 32. A method for providing an input interface for an electronic device, the method comprising:
identifying a projection area of a projector on a surface, the projection area corresponding to where an image, provided by the projector, of an input interface with one or more input areas can be displayed;
identifying an active sensor area of a sensor system on the surface, the sensor system being in a cooperative relationship with the projector, the active sensor area corresponding to where the sensor system is capable of providing information for approximating a position of an object contacting the surface; and
causing the image of the interface to be provided within a boundary of an intersection of the projection area and the active sensor area.
- 33. The method of claim 32, further comprising:
approximating a position of an object contacting one of the regions of the interface using information provided from the sensor system.
- 34. The method of claim 32, further comprising:
projecting a keyboard using the projector on the intersection of the active sensor area and the projection area; and determining a key in the keyboard selected by a user-controlled object contacting the surface by approximating a position of the object contacting one of the regions of the keyboard using information provided from the sensor system.
- 35. The method of claim 34, wherein identifying an active sensor area of a sensor system on the surface includes identifying a first area on the surface where a sensor light of the sensor system can be directed, and identifying a second area where a light detecting device of the sensor system is operable, wherein the active sensor area corresponds to an intersection of the first area and the second area.
- 36. The method of claim 33, wherein causing the image of the interface to be provided within a boundary of an intersection of the projection area and the active sensor area includes fitting the image of the interface into the intersection area at a given depth from the electronic device.
- 37. The method of claim 36, wherein fitting the image of the interface into the intersection area at a given depth from the electronic device includes determining a maximum dimension of the interface based on a span of the intersection area in a region of the intersection area that is to provide the input interface.
- 38. The method of claim 36, wherein fitting the image of the interface into the intersection area at a given depth from the electronic device includes tapering a shape of the input interface based on a span of the intersection area in a region of the intersection area that is to provide the input interface.
- 39. The method of claim 36, wherein causing the image of the interface to be provided within a boundary of an intersection of the projection area and the active sensor area includes positioning the input interface within a region of the intersection that can accommodate a designated size of the input interface.
- 40. A method for providing a light-generated input interface, the method comprising:
converting a representation of a specified configuration for the light-generated input interface into a first form for use by a projector;
converting the representation of the configuration for the light-generated input interface into a second form for use by a sensor system; and
causing the light-generated input interface to be projected onto a surface to have the specified configuration of the representation.
- 41. The method of claim 40, wherein converting a representation of a specified configuration includes receiving a computerized illustration of the specified configuration.
- 42. The method of claim 40, wherein converting a representation of a specified configuration for the light-generated input interface into a first form for use by a projector includes converting the representation into a bitmap file, and wherein the method further comprises configuring the projector using the bitmap file.
- 43. The method of claim 40, wherein converting the representation of the configuration for the light-generated input interface into a second form for use by a sensor system includes converting the representation into a set of machine-readable configuration data, and wherein the method further comprises the step of configuring the sensor system using the set of machine-readable configuration data.
- 44. The method of claim 40, wherein the specified configuration specifies an arrangement of keys for an image of a keyboard.
- 45. The method of claim 44, wherein the specified configuration specifies a position of a mouse pad region that is to be displayed with the keyboard.
- 46. The method of claim 44, wherein the keyboard is in a QWERTY form.
- 47. The method of claim 40, further comprising the steps of:
identifying a plurality of distinct regions specified by the representation; and
identifying a property specified for each of the plurality of distinct regions.
- 48. The method of claim 47, wherein the step of converting the representation of the configuration into a second form includes assigning a first region in the plurality of distinct regions to a first property specified for that first region.
- 49. The method of claim 48, wherein assigning a first region in the plurality of distinct regions to a first property specified for that first region includes identifying a type of contact by an object on the first region that is to be interpreted as an input.
- 50. The method of claim 49, wherein identifying a type of contact by the object on the first region includes identifying whether one or more of a movement, a single-tap, or a double-tap is to be interpreted as the input.
- 51. The method of claim 48, further comprising assigning a first region in the plurality of distinct regions to a first input value.
- 52. A method for providing a light-generated input interface, the light-generated input interface including a projector for projecting an image of the input interface, and a sensor system to detect user interaction with the input interface, the method comprising:
receiving an output file from a diffractive optical element of the projector, the output file providing information about an image of the input interface that is to appear on a surface;
creating a simulated image of the input interface based on the information provided by the output file;
editing the simulated image; and
converting the edited simulated image into a form for configuring the projector.
- 53. The method of claim 52, wherein editing the simulated image includes automatically editing the image by comparing a desired image of the interface to the simulated image of the input interface.
- 54. The method of claim 52, further comprising filtering the information contained in the output file in order to perform the step of creating a simulated image.
- 55. The method of claim 52, further comprising using the information contained in the output file to generate a new output file having coordinates of pixels that are either lit or unlit.
- 56. The method of claim 52, wherein editing the simulated image includes altering a state of selected individual pixels.
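
Claims 1, 3, 11, 13, and 32 turn on two nested intersections: the active sensing area is itself the intersection of the illuminated area and the detector's coverage, and the displayed interface must fall within the intersection of that sensing area with the projection area. Below is a minimal sketch of that containment test, assuming, purely for illustration, that each area can be approximated by an axis-aligned rectangle on the work surface; the `Rect` type and all names are hypothetical, not the patent's implementation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Rect:
    """Axis-aligned region on the work surface, in millimetres.
    x runs across the surface; y runs away from the device."""
    x0: float
    y0: float
    x1: float
    y1: float

    def intersect(self, other: "Rect") -> Optional["Rect"]:
        x0, y0 = max(self.x0, other.x0), max(self.y0, other.y0)
        x1, y1 = min(self.x1, other.x1), min(self.y1, other.y1)
        return Rect(x0, y0, x1, y1) if x0 < x1 and y0 < y1 else None

    def contains(self, other: "Rect") -> bool:
        return (self.x0 <= other.x0 <= other.x1 <= self.x1 and
                self.y0 <= other.y0 <= other.y1 <= self.y1)

def image_fits(illuminated: Rect, detected: Rect,
               projected: Rect, image: Rect) -> bool:
    # Claims 3/13: active sensing area = illuminated ∩ detected.
    sensing = illuminated.intersect(detected)
    if sensing is None:
        return False
    # Claims 1/11/32: the image must lie within sensing ∩ projection.
    usable = sensing.intersect(projected)
    return usable is not None and usable.contains(image)
```

Orienting the sensor system or the projector, as the claims recite, amounts to shifting these footprints until `image_fits` holds for the intended interface image.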
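Claims 3, 5, 13, and 15 recite a sensor light grazing the surface and a light detecting device that captures the pattern reflected off the contacting object, the pattern being "measurable" to indicate position. One common way such a pattern becomes measurable is triangulation; the sketch below assumes a pinhole camera mounted above the light fan, so the image row of a reflection encodes depth and the column encodes lateral offset. This geometry and every parameter name are assumptions for illustration, not the disclosed sensor design.

```python
import math

def contact_position(row: int, col: int, *, cam_height_mm: float,
                     tilt_rad: float, f_px: float,
                     cy: int, cx: int) -> tuple[float, float]:
    """Triangulate the surface point whose reflection lands at (row, col).

    cam_height_mm: camera height above the surface
    tilt_rad:      downward tilt of the optical axis from horizontal
    f_px:          focal length in pixels; (cy, cx): principal point
    """
    # Angle of this pixel's ray below the horizontal.
    angle = tilt_rad + math.atan2(row - cy, f_px)
    depth = cam_height_mm / math.tan(angle)   # distance along the surface
    lateral = depth * (col - cx) / f_px       # small-angle approximation
    return lateral, depth
```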
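Claims 2, 4, 12, 14, and 34 have the processor map an approximated contact position to the input area, or key, containing it, and from there to an input value. A minimal lookup, assuming rectangular key areas keyed by their input value (the layout data here is illustrative, not a disclosed keyboard geometry):

```python
from typing import Optional

# Key areas as (x0, y0, x1, y1) rectangles on the surface, in millimetres.
KEYS = {
    "Q": (0.0, 0.0, 18.0, 18.0),
    "W": (18.0, 0.0, 36.0, 18.0),
    "E": (36.0, 0.0, 54.0, 18.0),
}

def selected_key(x: float, y: float) -> Optional[str]:
    """Claims 2/12: identify the input area containing the contact point;
    claims 4/14: the key's name doubles as its input value."""
    for value, (x0, y0, x1, y1) in KEYS.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return value
    return None  # contact fell outside every input area
```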
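Claims 20-25 and 36-39 size and place the keyboard against the intersection area: either the keyboard's dimensions are designated and the region closest to the sensor system that accommodates them is found (claim 20), or the position is designated and the width is maximized, possibly tapering the outline (claims 21-25, 38). The sketch below assumes the intersection fans out linearly with depth, which is plausible when the projector and sensor share a vantage point near the surface; the trapezoid model and all parameters are illustrative assumptions.

```python
import math
from typing import Optional

def intersection_width(depth_mm: float, w_near_mm: float,
                       half_angle_deg: float) -> float:
    """Usable width of the (assumed trapezoidal) intersection at a depth."""
    return w_near_mm + 2.0 * depth_mm * math.tan(math.radians(half_angle_deg))

def nearest_fit_depth(kbd_w: float, kbd_d: float, w_near: float,
                      half_angle: float, max_depth: float,
                      step: float = 1.0) -> Optional[float]:
    """Claim 20: the region closest to the sensor system that can still
    accommodate the designated keyboard dimensions."""
    d = 0.0
    while d + kbd_d <= max_depth:
        # The tightest constraint is the keyboard's near (narrow) edge.
        if intersection_width(d, w_near, half_angle) >= kbd_w:
            return d
        d += step
    return None

def tapered_widths(depth0: float, kbd_d: float, w_near: float,
                   half_angle: float, fill: float = 0.9) -> tuple[float, float]:
    """Claims 22-25, 38: with the depth designated, taper the keyboard so
    each edge spans a fixed fraction (e.g. at least 90%) of the
    intersection width at that edge's depth."""
    near = fill * intersection_width(depth0, w_near, half_angle)
    far = fill * intersection_width(depth0 + kbd_d, w_near, half_angle)
    return near, far
```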
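Claims 40-43 and 47-51 convert a single representation of the interface into two forms: a bitmap that configures the projector, and machine-readable region/property records that configure the sensor system, including which contact types trigger an input (claim 50) and which input value each region reports (claim 51). The record layout, bitmap encoding, and names below are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class Region:
    name: str                                    # e.g. "Q" or "mousepad"
    bounds: tuple[float, float, float, float]    # x0, y0, x1, y1 in mm
    input_value: str                             # claim 51: reported value
    contact_types: tuple[str, ...]               # claim 50: "tap", "movement", ...

def to_projector_bitmap(regions: list[Region], px_per_mm: float,
                        size: tuple[int, int]) -> list[list[int]]:
    """First form (claim 42): a monochrome bitmap; 1 = lit pixel. Only key
    borders are shaded here, in the spirit of claims 17-18."""
    w, h = size
    bmp = [[0] * w for _ in range(h)]
    for r in regions:
        x0, y0, x1, y1 = (round(v * px_per_mm) for v in r.bounds)
        for y in range(max(y0, 0), min(y1, h)):
            for x in range(max(x0, 0), min(x1, w)):
                if x in (x0, x1 - 1) or y in (y0, y1 - 1):
                    bmp[y][x] = 1
    return bmp

def to_sensor_config(regions: list[Region]) -> dict:
    """Second form (claim 43): machine-readable configuration data mapping
    each distinct region to its properties (claims 47-48)."""
    return {r.name: {"bounds": r.bounds,
                     "value": r.input_value,
                     "triggers": list(r.contact_types)}
            for r in regions}
```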
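Claims 52-56 describe a round trip: read the output file describing what the diffractive optical element will actually draw, simulate the resulting image, touch up individual pixel states, and convert the result back into a form for configuring the projector. The line-oriented "x y" coordinate format below is an assumption for the sketch; real DOE design tools emit their own formats.

```python
def load_lit_pixels(path: str) -> set[tuple[int, int]]:
    """Claims 54-55: filter the output file down to lit-pixel coordinates."""
    lit = set()
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):  # claim 54: filter noise
                continue
            x, y = map(int, line.split())
            lit.add((x, y))
    return lit

def simulate(lit: set[tuple[int, int]],
             size: tuple[int, int]) -> list[list[int]]:
    """Claim 52: a simulated image of the interface as it would appear."""
    w, h = size
    return [[1 if (x, y) in lit else 0 for x in range(w)] for y in range(h)]

def set_pixel(img: list[list[int]], x: int, y: int, lit: bool) -> None:
    """Claim 56: editing alters the state of selected individual pixels."""
    img[y][x] = int(lit)

def to_projector_form(img: list[list[int]]) -> set[tuple[int, int]]:
    """Claim 55: a new output file's worth of lit-pixel coordinates."""
    return {(x, y) for y, row in enumerate(img)
            for x, v in enumerate(row) if v}
```

Claim 53's automatic editing would sit between `simulate` and `to_projector_form`, diffing the simulated image against a desired image and calling something like `set_pixel` on the mismatches.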
RELATED APPLICATION AND PRIORITY INFORMATION
[0001] This application claims benefit of priority to Provisional U.S. Patent Application No. 60/340,005, entitled “Design For Projected 2-Dimensional Keyboard,” filed Dec. 7, 2001; to Provisional U.S. Patent Application No. 60/424,095, entitled “Method For Creating A Useable Projection Keyboard Design,” filed Nov. 5, 2002; and to Provisional U.S. Patent Application No. 60/357,733, entitled “Method and Apparatus for Designing the Appearance, and Defining the Functionality and Properties of a User Interface for an Input Device,” filed Feb. 15, 2002. All of the aforementioned priority applications are hereby incorporated by reference in their entirety for all purposes.
Provisional Applications (3)

| Number | Date | Country |
| --- | --- | --- |
| 60/340,005 | Dec. 7, 2001 | US |
| 60/424,095 | Nov. 5, 2002 | US |
| 60/357,733 | Feb. 15, 2002 | US |