Recent developments in the field of multi-touch input for personal computers provide improved input capabilities for computer application programs. Along with the innovation of the touch screen, the multi-finger, gesture-based touchpad provides considerably improved productivity as an input device compared with standard input devices such as conventional mice.
Currently, the standard touchpad installed on keyboards and remote controllers is a single-touch sensor pad. Despite its widespread use, the single-touch sensor pad has inherent difficulty generating multi-touch inputs or intuitive multi-dimensional input commands.
Accordingly, a need exists for a single-touch sensor pad that has multi-touch input capability equivalent to that of a multi-touch sensor pad or other multi-dimensional input devices.
The present invention has been developed in response to problems and needs in the art that have not yet been fully resolved by currently available touchpad systems and methods. Accordingly, the present systems and methods use a single-touch sensor pad combined with an imaging sensor to provide a multi-touch user interface. These systems and methods can be used to control conventional 2-D and 3-D software applications. These systems and methods also allow multi-dimensional input commands to be generated by two hands or fingers of a user on a single touchpad. The systems and methods further allow input commands to be made simply by hovering the user's fingers above the touchpad surface.
Implementations of the present systems and methods provide numerous beneficial features and advantages. For example, the present systems and methods can provide a dual-input mode, wherein, for instance, in a first mode, a multi-touch command can be generated by making a hand gesture on a single-touch sensor pad, and in a second mode, a multi-touch input can be generated by making a hand gesture in free space. In operation, the systems and methods can operate in the first input mode when the single-touch sensor pad senses a touchpoint from a user's finger on the single-touch sensor pad, and can switch to the second input mode when the single-touch sensor pad senses the absence of such a touchpoint.
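By way of illustration, a minimal sketch of this mode-selection logic might look like the following; the function names and the returned mode labels are assumptions introduced for the example and are not drawn from the disclosure.

```python
# Illustrative sketch (assumed names): contact on the single-touch sensor pad
# selects the first (surface) input mode; absence of contact selects the
# second (free-space) input mode.

def select_input_mode(pad_has_touchpoint: bool) -> str:
    return "surface" if pad_has_touchpoint else "free_space"

def process_frame(pad_has_touchpoint: bool, pad_data, image):
    mode = select_input_mode(pad_has_touchpoint)
    if mode == "surface":
        # First mode: fuse touchpad data with the captured image (data fusion path).
        return ("surface_gesture", pad_data, image)
    # Second mode: rely on the captured image alone (free-space gesture path).
    return ("free_space_gesture", image)

if __name__ == "__main__":
    print(process_frame(True, (0.4, 0.6), "frame_0"))
    print(process_frame(False, None, "frame_1"))
```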
In some implementations of the system, by using data fusion, the present systems and methods can significantly reduce the computational burden for multi-touch detection and tracking on a touchpad. At the same time, a manufacturer can produce the system using a low-cost single-touch sensor pad, rather than a higher-cost multi-touch sensor pad, while still providing multi-touchpad capabilities. The resulting system can enable intuitive input commands that can be used, for example, for controlling multi-dimensional applications.
One aspect of the invention involves a system for generating a multi-touch command using a single-touch sensor pad and an imaging sensor. The imaging sensor is disposed adjacent to the single-touch sensor pad and captures one or more images of a user's fingers on or above the single-touch sensor pad. The system includes firmware that acquires data from the single-touch sensor pad and uses that data with the one or more images from the imaging sensor. Using the acquired data, the firmware can generate a multi-touch command.
Another aspect of the invention involves a method for generating a multi-touch command with a single-touch sensor pad. The method involves acquiring data from a single-touch sensor pad that indicates whether or not a user is touching the sensor pad and, if so, where. The method also involves acquiring images of the user's fingers from an imaging sensor. Firmware of the system can then use the acquired information and images to identify the user's hand gesture and generate a multi-touch command corresponding to this hand gesture.
These and other features and advantages of the present invention may be incorporated into certain embodiments of the invention and will become more fully apparent from the following description and appended claims, or may be learned by the practice of the invention as set forth hereinafter. The present invention does not require that all the advantageous features and all the advantages described herein be incorporated into every embodiment of the invention.
In order that the manner in which the above-recited and other features and advantages of the invention are obtained will be readily understood, a more particular description of the invention briefly described above will be rendered by reference to specific embodiments thereof, which are illustrated in the appended drawings. These drawings depict only typical embodiments of the invention and are not therefore to be considered to limit the scope of the invention.
The presently preferred embodiments of the present invention can be understood by reference to the drawings, wherein like reference numbers indicate identical or functionally similar elements. It will be readily understood that the components of the present invention, as generally described and illustrated in the figures herein, could be arranged and designed in a wide variety of different configurations. Thus, the following more detailed description, as represented in the figures, is not intended to limit the scope of the invention as claimed, but is merely representative of presently preferred embodiments of the invention.
The following disclosure of the present invention may be grouped into subheadings. The utilization of the subheadings is for convenience of the reader only and is not to be construed as limiting in any sense.
The description may use perspective-based descriptions such as up/down, back/front, left/right and top/bottom. Such descriptions are merely used to facilitate the discussion and are not intended to restrict the application or embodiments of the present invention.
For the purposes of the present invention, the phrase “A/B” means A or B. For the purposes of the present invention, the phrase “A and/or B” means “(A), (B), or (A and B).” For the purposes of the present invention, the phrase “at least one of A, B, and C” means “(A), (B), (C), (A and B), (A and C), (B and C), or (A, B and C).”
Various operations may be described as multiple discrete operations in turn, in a manner that may be helpful in understanding embodiments of the present invention; however, the order of description should not be construed to imply that these operations are order dependent.
The description may use the phrases “in an embodiment,” or “in various embodiments,” which may each refer to one or more of the same or different embodiments. Furthermore, the terms “comprising,” “including,” “having,” and the like, as used with respect to embodiments of the present invention, are synonymous with the definition afforded the term “comprising.”
The present input systems and methods can detect the 2-D coordinates of multiple fingertips on a single-touch sensor pad (or simply “touchpad”) using image data (or simply “images”) from an imaging sensor. The present systems and methods utilize a single-touch sensor pad that can report the 2-D coordinates Pav, where Pav=(Xav, Yav), of an average touchpoint of multiple touchpoints when a user places two or more fingertips on the surface of the single-touch sensor pad. To compute the correct 2-D coordinates of each fingertip, the present systems and methods use the 2-D coordinates, Pav, of the average touchpoint in combination with, or fused with, image data captured from the imaging sensor. Data fusion refers generally to the combination of data from multiple sources in order to draw inferences. In the present systems and methods, data fusion refers to the combination of data from the touchpad 20 and the imaging sensor 22 to identify the location of the user's fingers more efficiently and precisely than if each data source were used separately. Using data fusion, the present systems and methods can determine the 2-D location of each fingertip (or touchpoint) on the surface of the touchpad 20.
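To illustrate why the fusion is needed, the short sketch below (an illustrative assumption, not part of the disclosure) shows that a pad reporting only the average touchpoint cannot by itself distinguish two different fingertip pairs that share the same average:

```python
# Illustrative sketch: a single-touch sensor pad reports only the average
# touchpoint Pav = (Xav, Yav) when two fingertips rest on its surface.
# Two different fingertip pairs can produce the same average, which is why
# image data from the imaging sensor is fused in to recover each fingertip.

def average_touchpoint(p1, p2):
    (x1, y1), (x2, y2) = p1, p2
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

if __name__ == "__main__":
    print(average_touchpoint((2.0, 3.0), (6.0, 5.0)))  # (4.0, 4.0)
    print(average_touchpoint((3.0, 2.0), (5.0, 6.0)))  # also (4.0, 4.0)
```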
The imaging sensor 22 can be a low-resolution, black-and-white imaging sensor configured for data fusion purposes (e.g., a CMOS sensor with a CGA resolution of 320×200 black-and-white pixels). The imaging sensor 22 is mounted on a keyboard 24 adjacent to the touchpad 20 in a manner that allows a sensor camera 28 of the imaging sensor 22 to capture images of a user's fingers on the surface of the touchpad 20 or in free space above the touchpad 20 and/or imaging sensor 22. In some embodiments, the sensor camera 28 of the imaging sensor 22 can be movable in order to change its camera angle (including both the vertical and horizontal angles of orientation). The movement of the sensor camera 28 can be automatic or manual. For example, the sensor camera 28 can sense the location of a user's hand 30 and automatically adjust its orientation toward the user's hand 30. The movement of the sensor camera 28 is illustrated in the accompanying drawings.
As an optional design feature, a light 26, such as a small LED light, can be installed on the keyboard 24 adjacent to the touchpad 20 to provide light to the touchpad 20 area and the area above the touchpad 20 and/or above the imaging sensor 22. Thus, in some configurations, the light 26 is configured to illuminate at least the touchpad 20 and a portion of a user's fingers when the user's fingers are in contact with the touchpad 20. Some embodiments may benefit by providing a movable light that can move manually or automatically to change the angle of illumination along two or more planes.
In the data processing in the second logical device 74, the firmware 70 acquires data from the touchpad 20 that identifies the presence or absence of a touchpoint on the touchpad 20 and, if there is a touchpoint, its position or coordinates. The firmware 70 also acquires images from the imaging sensor 22. The images can be acquired as data representing a pixelated image. Using this acquired data, the firmware 70 can identify a hand gesture made by the user's one or more fingers and generate a multi-touch command based on the identified hand gesture. The final output from the second logical device 74 is in the same format as that of a multi-touch sensor pad. The third logical device 76 of the firmware 70 can perform real-time template-tracking calculations to identify the 3-D location and orientation of an object corresponding to the user's fingers or hand in free space. This third logical device can operate independently of the second logical device when the user's hand is not touching the touchpad 20. Additional functions of the firmware 70 will be described below.
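A sketch of the kind of multi-touch-style output the second logical device 74 might assemble is shown below; the structure and field names are assumptions, since the disclosure states only that the output matches the format of a multi-touch sensor pad.

```python
# Illustrative sketch of a multi-touch-style report assembled by the firmware.
# The structure and field names are assumed; the disclosure states only that the
# second logical device outputs data in the same format as a multi-touch sensor pad.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class MultiTouchReport:
    touching: bool                           # any contact on the single-touch pad?
    touchpoints: List[Tuple[float, float]]   # fused per-fingertip coordinates
    gesture: str                             # identified hand gesture, e.g. "pinch"

def build_report(touching: bool, fused_points, gesture: str) -> MultiTouchReport:
    return MultiTouchReport(touching=touching,
                            touchpoints=list(fused_points),
                            gesture=gesture)

if __name__ == "__main__":
    print(build_report(True, [(3.0, 2.0), (5.0, 6.0)], "two_finger_spread"))
```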
The following description explains the process of identifying a multi-touch location using a data fusion algorithm within the firmware 70, with reference to the accompanying drawings.
An explanation of the data fusion algorithm used to compute the actual location of each touchpoint on the touchpad 20 will now be provided. Initially, the firmware 70 acquires an average touchpoint (X, Y), as illustrated in the accompanying drawings.
The firmware 70 can then iterate through the following steps. After the average touchpoint (X, Y) is acquired, it is mapped onto the pixel coordinate system of the imaging sensor 22. The firmware 70 can then scan a one-dimensional pixel line adjacent to the mapped average touchpoint and identify the edges of the user's fingers along that line.
Next, once the edges of the fingers are identified, the firmware 70 can detect the number of fingers in the image and thus the number of touchpoints on the touchpad 20. The firmware 70 can also use the pixel coordinate system to measure the distance between the fingertips depicted in the image, which can be used to determine the distance between the touchpoints. In the case of two touchpoints, the detected distances between the coordinates of the two touchpoints can be given the values Dx and Dy, as illustrated in the accompanying drawings.
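One possible way to perform such a one-dimensional scan is sketched below; the intensity threshold and the assumption that fingers image brighter than the background are illustrative choices, not details taken from the disclosure.

```python
# Illustrative sketch: scan a single pixel row near the mapped average touchpoint,
# find finger spans by intensity thresholding, and estimate the number of fingers
# and the pixel distance between fingertip centers.
# The threshold value and bright-finger assumption are for illustration only.

def scan_line_fingers(pixel_row, threshold=128):
    """Return (finger_count, center_pixels) from one row of grayscale pixels."""
    spans = []       # (start_index, end_index) of each contiguous bright span
    start = None
    for i, value in enumerate(pixel_row):
        inside = value > threshold
        if inside and start is None:
            start = i
        elif not inside and start is not None:
            spans.append((start, i - 1))
            start = None
    if start is not None:
        spans.append((start, len(pixel_row) - 1))
    centers = [(s + e) / 2.0 for s, e in spans]
    return len(spans), centers

if __name__ == "__main__":
    row = [0] * 10 + [200] * 6 + [0] * 8 + [210] * 5 + [0] * 10  # two finger spans
    count, centers = scan_line_fingers(row)
    dx_pixels = centers[1] - centers[0] if count == 2 else 0.0
    print(count, centers, dx_pixels)
```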
Next, the firmware 70 can identify the coordinates of the two or more actual touchpoints. For example, when two touchpoints are detected, the firmware 70 can compute the coordinates of the first touchpoint (X1, Y1) and the second touchpoint (X2, Y2) using the known values of (X, Y), Dx, and Dy, and the following equations:
X1 = X − Dx/2; Y1 = Y − Dy/2;
X2 = X + Dx/2; Y2 = Y + Dy/2.
Lastly, if a sequence of successive touchpoint coordinates exhibits one or more jerky movements, the set of touchpoint coordinates can be smoothed by filtering it with a digital low-pass filter or other suitable filter.
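A compact sketch of this reconstruction and smoothing step might look like the following; the exponential-smoothing filter is offered only as one example of a suitable low-pass filter and is not specified in the disclosure.

```python
# Sketch of the two-touchpoint reconstruction from the averaged touchpoint (X, Y)
# and the fused distances Dx, Dy, followed by simple low-pass smoothing.
# The smoothing factor alpha is an assumed example of a "suitable filter".

def reconstruct_touchpoints(x_avg, y_avg, dx, dy):
    p1 = (x_avg - dx / 2.0, y_avg - dy / 2.0)
    p2 = (x_avg + dx / 2.0, y_avg + dy / 2.0)
    return p1, p2

def smooth(previous, current, alpha=0.3):
    """First-order low-pass filter to suppress jerky coordinate jumps."""
    return tuple(alpha * c + (1.0 - alpha) * p for p, c in zip(previous, current))

if __name__ == "__main__":
    p1, p2 = reconstruct_touchpoints(4.0, 4.0, 4.0, 2.0)
    print(p1, p2)                        # (2.0, 3.0) and (6.0, 5.0)
    print(smooth((2.0, 3.0), (2.6, 3.4)))
```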
As noted, the image processing for the second logical device 74 of the firmware 70 does not adopt a typical image-processing method for tracking touchpoints, such as a real-time template (object shape) tracking algorithm. These typical methods place a heavy computational load on the microprocessor 64. The present methods can reduce the computational load on the microprocessor 64 by scanning a one-dimensional pixel line adjacent to the averaged touchpoint mapped onto the imaging sensor's pixel coordinates to estimate the distance between fingertips. Accordingly, the method of data fusion using the averaged touchpoint from the touchpad 20 and partial pixel data from the imaging sensor 22 places a significantly reduced computational burden on the microprocessor 64 compared with traditional real-time image-processing methods.
As mentioned, the fusion of data from the touchpad 20 and the imaging sensor 22 can be used to generate multi-touch commands. When using data fusion to generate multi-touch commands, both the touchpad 20 and the imaging sensor 22 are used as primary inputs and independently utilized for input command generation. A real-time, template-tracking algorithm can also be used by the firmware 70.
An example of this operation is illustrated in the accompanying drawings.
In some embodiments, the present systems and methods provide a multi-touch input gesture that is generated by a finger hovering gesture in proximity to the surface of the touchpad 20, as shown in the accompanying drawings.
Thus configured, the imaging sensor 22 can detect not only the 2-D positions of the fingers 32, 34 in the local X-Y coordinates of the touchpad 20, but also the vertical distance (along the Z-axis) between the user's fingertips and the surface of the touchpad 20. The data relating to the fingertip positions in proximity to the touchpad 20 can be used for Z-axis-related commands, such as Z-axis translation or the creation of multiple modal controls for multi-finger, gesture-based input commands.
In some configurations, the imaging sensor 22 is tuned to identify both the local X-Y position of the fingers 32, 34 on and above the touchpad 20 and the hovering distance of the fingers 32, 34 above the touchpad 20. This identification can be made by comparing sequential image frames (e.g., the current and previous image frames), such as the image frames illustrated in the accompanying drawings.
When a user's finger contacts the surface of the touchpad 20, the absolute location of the touchpoint is identified by data fusion, as previously described. However, after the user's fingers 32, 34 are lifted to hover over the touchpad 20 surface, data fusion may not be able to identify the exact 2-D location of the fingers 32, 34. In these instances, the imaging sensor 22 can estimate the change in position along the X-axis by comparing the captured image of the previous frame with that of the current frame, as illustrated in the accompanying drawings.
In some embodiments, the firmware 70 can also detect Y-axis (forward/backward) movement of a finger that is hovering over the touchpad 20. In these embodiments, the firmware 70 and/or imaging sensor 22 can utilize the same frame-comparison method described above.
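One possible form of this frame-to-frame comparison is sketched below; locating the fingertip by the brightest image column is an assumption used only to keep the example self-contained, not a technique stated in the disclosure.

```python
# Illustrative sketch: estimate the horizontal (X-axis) movement of a hovering
# finger by comparing its column position between the previous and current
# image frames. The column-of-maximum-brightness heuristic is an assumption.

def fingertip_column(frame):
    """Return the column index with the highest summed intensity."""
    n_cols = len(frame[0])
    sums = [sum(row[c] for row in frame) for c in range(n_cols)]
    return max(range(n_cols), key=lambda c: sums[c])

def estimate_x_shift(previous_frame, current_frame):
    return fingertip_column(current_frame) - fingertip_column(previous_frame)

if __name__ == "__main__":
    prev = [[0, 9, 0, 0], [0, 9, 0, 0]]
    curr = [[0, 0, 0, 9], [0, 0, 0, 9]]
    print(estimate_x_shift(prev, curr))   # +2 columns: the finger moved right
```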
As will be understood from the foregoing, the present systems and methods can be used to generate multi-touch commands from hand gestures made both on the surface of the touchpad 20 and while hovering the fingers over the surface of the touchpad 20. Examples of multi-touch commands made while contacting the touchpad 20 include scrolling, swiping web pages, zooming text and images, rotating pictures, and the like. Similarly, multi-touch commands can be made by hovering the fingers over the touchpad 20. For example, moving a hovered finger in a right/left direction can signal an X-axis translation. In another example, moving a hovered finger in a forward/backward direction can signal a Y-axis translation. In other examples, moving two hovered fingers in the right/left direction can signal a yaw command (rotation about the Y-axis), while moving two hovered fingers forward/backward can signal a pitch command (rotation about the X-axis). In a specific instance, commands made by hovering a finger can provide camera-view-change commands for a 3-D map application, such as Google Earth.
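The gesture-to-command examples above could be organized as a simple lookup, as in the illustrative sketch below; the gesture and command labels are assumed names, not terms from the disclosure.

```python
# Illustrative mapping of hover gestures to multi-dimensional commands,
# following the examples given above. Gesture and command names are assumed labels.

HOVER_GESTURE_COMMANDS = {
    ("one_finger", "left_right"):    "x_translation",
    ("one_finger", "forward_back"):  "y_translation",
    ("two_fingers", "left_right"):   "yaw",    # rotation about the Y-axis
    ("two_fingers", "forward_back"): "pitch",  # rotation about the X-axis
}

def hover_command(finger_count: int, direction: str) -> str:
    key = ("one_finger" if finger_count == 1 else "two_fingers", direction)
    return HOVER_GESTURE_COMMANDS.get(key, "no_command")

if __name__ == "__main__":
    print(hover_command(2, "left_right"))    # yaw
    print(hover_command(1, "forward_back"))  # y_translation
```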
In some configurations, hand gestures made on the surface of the touchpad 20 trigger a first command mode, while hand gestures made while hovering one or more fingers over the touchpad 20 trigger a second command mode. In some instances, these two modes enable a dual-mode system that can receive inputs while a user makes hand gestures on and above a touchpad 20. Thus, the user can touch the touchpad 20 and hover fingers over the touchpad 20 and/or the imaging sensor 22 to provide inputs to a software program.
The present invention may be embodied in other specific forms without departing from its structures, methods, or other essential characteristics as broadly described herein and claimed hereinafter. The described embodiments are to be considered in all respects only as illustrative, and not restrictive. The scope of the invention is, therefore, indicated by the appended claims, rather than by the foregoing description. All changes that come within the meaning and range of equivalency of the claims are to be embraced within their scope.
This application claims the benefit of United States Provisional Application No. 61/429,273, filed Jan. 3, 2011, entitled MULTI-TOUCH INPUT APPARATUS AND ITS INTERFACE METHOD USING DATA FUSION OF A SINGLE TOUCHPAD AND AN IMAGING SENSOR, which is incorporated herein by reference.