This invention relates generally to touch sensitive surfaces, and more particularly to using touch surfaces to recognize and act upon hand gestures made by touching the surface.
Recent advances in sensing technology have enabled increased expressiveness of freehand touch input, see Ringel et al., “Barehands: Implement-free interaction with a wall-mounted display,” Proc. CHI 2001, pp. 367-368, 2001, and Rekimoto, “SmartSkin: An infrastructure for freehand manipulation on interactive surfaces,” Proc. CHI 2002, pp. 113-120, 2002.
A large touch sensitive surface presents some new issues that are not present with traditional touch sensitive devices. Any touch system is limited by its sensing resolution. For a large surface, the resolution can be considerably lower than with traditional touch devices. When each one of multiple users can simultaneously generate multiple touches, it becomes difficult to determine the context of the touches. This problem has been addressed, in part, for single inputs, such as for mouse-based and pen-based stroke gestures, see André et al., “Paper-less editing and proofreading of electronic documents,” Proc. EuroTeX, 1999, Guimbretiere et al., “Fluid interaction with high-resolution wall-size displays,” Proc. UIST 2001, pp. 21-30, 2001, Hong et al., “SATIN: A toolkit for informal ink-based applications,” Proc. UIST 2000, pp. 63-72, 2000, Long et al., “Implications for a gesture design tool,” Proc. CHI 1999, pp. 40-47, 1999, and Moran et al., “Pen-based interaction techniques for organizing material on an electronic whiteboard,” Proc. UIST 1997, pp. 45-54, 1997.
The problem becomes more complicated for hand gestures, which are inherently imprecise and inconsistent. A particular hand gesture for a particular user can vary over time. This is partially due to the many degrees of freedom in the hand. The number of individual hand poses is very large. Also, it is physically demanding to maintain the same hand pose over a long period of time.
Machine learning and tracking within vision-based systems have been used to disambiguate hand poses. However, most of those systems require discrete static hand poses or gestures, and fail to deal with highly dynamic hand gestures, see Cutler et al., “Two-handed direct manipulation on the responsive workbench,” Proc. I3D 1997, pp. 107-114, 1997, Koike et al., “Integrating paper and digital information on EnhancedDesk,” ACM Transactions on Computer-Human Interaction, 8 (4), pp. 307-322, 2001, Krueger et al., “VIDEOPLACE—An artificial reality,” Proc. CHI 1985, pp. 35-40, 1985, Oka et al., “Real-time tracking of multiple fingertips and gesture recognition for augmented desk interface systems,” Proc. FG 2002, pp. 429-434, 2002, Pavlovic et al., “Visual interpretation of hand gestures for human-computer interaction: A review,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 19 (7), pp. 677-695, 1997, and Ringel et al., “Barehands: Implement-free interaction with a wall-mounted display,” Proc. CHI 2001, pp. 367-368, 2001. Generally, camera-based systems are difficult and expensive to implement, require extensive calibration, and are typically confined to controlled settings.
Another problem with an interactive touch surface that also displays images is occlusion. This problem has been addressed for single point touch screen interaction, see Sears et al., “High precision touchscreens: Design strategies and comparisons with a mouse,” International Journal of Man-Machine Studies, 34 (4), pp. 593-613, 1991, and Albinsson et al., “High precision touch screen interaction,” Proc. CHI 2003, pp. 105-112, 2003. Pointers have been used to interact with wall-based display surfaces, see Myers et al., “Interacting at a distance: Measuring the performance of laser pointers and other devices,” Proc. CHI 2002, pp. 33-40, 2002.
It is desired to provide a gesture input system for a touch sensitive surface that can recognize multiple simultaneous touches by multiple users.
It is an object of the invention to recognize different hand gestures made by touching a touch sensitive surface.
It is desired to recognize gestures made by multiple simultaneous touches.
It is desired to recognize gestures made by multiple users touching a surface simultaneously.
A method according to the invention recognizes hand gestures. An intensity of a signal at touch sensitive pads of a touch sensitive surface is measured. The number of regions of contiguous pads touched simultaneously is determined from the intensities of the signals. An area of each region is determined. Then, a particular gesture is selected according to the number of regions touched and the area of each region.
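By way of illustration only, the following Python sketch shows one possible form of this selection step. The area threshold and gesture labels are hypothetical and are not part of the specification; they merely illustrate selecting a gesture from the number of touched regions and the area of each region.

```python
# Minimal sketch only: the area threshold and gesture labels are
# hypothetical, not taken from the specification.

FINGER_MAX_AREA = 4  # regions of at most this many pads read as fingertips


def select_gesture(region_areas):
    """Select a gesture from the number of touched regions and their areas."""
    num_regions = len(region_areas)
    if num_regions == 0:
        return "no-touch"
    if num_regions == 1:
        # One small region suggests a fingertip; one large region, a palm or fist.
        return "point" if region_areas[0] <= FINGER_MAX_AREA else "flat-hand"
    if all(area <= FINGER_MAX_AREA for area in region_areas):
        return "multi-finger"
    return "mixed-hand"
```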
The invention uses a touch surface to detect hand gestures, and to perform computer operations according to the gestures. We prefer to use a touch surface that is capable of simultaneously recognizing multiple points of touch from multiple users, see Dietz et al., “DiamondTouch: A multi-user touch technology,” Proc. User Interface Software and Technology (UIST) 2001, pp. 219-226, 2001, and U.S. Pat. No. 6,498,590 “Multi-user touch surface,” issued to Dietz et al. on Dec. 24, 2002, incorporated herein by reference. This touch surface can be made arbitrarily large, e.g., the size of a tabletop. In addition, it is possible to project computer generated images on the surface during operation.
By gestures, we mean moving hands or fingers on or across the touch surface. The gestures can be made by one or more fingers, by closed fists, or by open palms, or combinations thereof. The gestures can be performed by one user or by multiple simultaneous users. It should be understood that gestures other than the example gestures described herein can be recognized.
The general operating framework for the touch surface is described in U.S. patent application Ser. No. 10/053,652 “Circular Graphical User Interfaces,” filed by Vernier et al. on Jan. 18, 2002, incorporated herein by reference. Single finger touches can be reserved for traditional mouse-like operations, e.g., point and click, select, drag, and drop, as described in the Vernier application.
Signal intensities 103 of the coupling can be read independently for each column along the x-axis, and for each row along the y-axis. Touching more pads in a particular row or column increases the signal intensity for that row or column. That is, the measured signal is proportional to the number of pads touched. It is observed that the signal intensity is generally greater in the middle part of a finger touch because of a better coupling. Interestingly, the coupling also improves as more pressure is applied, i.e., the intensity of the signal is coarsely related to the pressure of the touch.
The rows and columns of antennas are read along the x- and y-axis at a fixed rate, e.g., 30 frames/second, and each reading is presented to the software for analysis as a single vector of intensity values (x0, x1, . . . , xm, y0, y1, . . . , yn) for each time step. The intensity values are thresholded to discard low intensity signals and noise.
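A minimal sketch of this per-frame read-and-threshold step follows; the noise cutoff value and the function name are assumptions for illustration, not values from the specification.

```python
NOISE_THRESHOLD = 10.0  # hypothetical cutoff, tuned to the hardware


def read_frame(raw_vector, num_cols):
    """Split one reading (x0, ..., xm, y0, ..., yn) into column and row
    intensity profiles, zeroing out low-intensity signals and noise."""
    v = [s if s >= NOISE_THRESHOLD else 0.0 for s in raw_vector]
    return v[:num_cols], v[num_cols:]  # x-axis profile, y-axis profile
```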
Finger touches are readily distinguishable from a fist or an open hand. For example, a finger touch has relatively high intensity values concentrated over a small area, while a hand touch generally has lower intensity values spread over a larger area.
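This distinction can be sketched as a simple density test; both thresholds below are illustrative assumptions, not part of the invention.

```python
def classify_touch(area, total_intensity):
    """Distinguish a finger from a hand touch: a finger concentrates high
    intensity in a small area, while a hand spreads lower intensity over
    a larger one. Both thresholds are illustrative."""
    mean_intensity = total_intensity / max(area, 1)
    return "finger" if area <= 4 and mean_intensity > 50 else "hand"
```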
For each frame, the system determines the number of regions. For each region, an area and a location are determined. The area is determined from an extent (xlow, xhigh, ylow, yhigh) of the corresponding intensity values 104. This information also indicates where the surface was touched. A total signal intensity is also determined for each region. The total intensity is the sum of the thresholded intensity values for the region. A time is also associated with each frame. Thus, each touched region is described by area, location, intensity, and time. The frame summary is stored in a hash table, using a time-stamp as a hash key. The frame summaries can be retrieved at a later time.
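One way such a frame summary might be represented is sketched below; the field names and the dictionary-as-hash-table layout are assumptions for illustration.

```python
from dataclasses import dataclass


@dataclass
class Region:
    """Summary of one touched region: extent, total intensity, and
    (via the frame's time-stamp) time."""
    x_low: int
    x_high: int
    y_low: int
    y_high: int
    intensity: float  # sum of the thresholded intensity values in the region

    @property
    def area(self) -> int:
        return (self.x_high - self.x_low + 1) * (self.y_high - self.y_low + 1)


# Hash table of frame summaries, keyed by time-stamp, so that frames
# can be retrieved at a later time.
frame_table: dict[float, list[Region]] = {}


def store_frame(timestamp: float, regions: list[Region]) -> None:
    frame_table[timestamp] = regions
```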
The frame summaries are used to determine a trajectory of each region. The trajectory is a path along which the region moves. A speed of movement and a rate of change of speed (acceleration) along each trajectory can also be determined from the time-stamps. The trajectories are stored in another hash table.
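A minimal sketch of deriving speed and acceleration from time-stamped trajectory samples is given below; the sampling format (time, x, y) is an assumption.

```python
import math


def trajectory_dynamics(samples):
    """Given one region's trajectory as (time, x, y) samples taken from
    successive frame summaries, estimate speed and its rate of change
    (acceleration) between consecutive frames."""
    speeds = []
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        speeds.append((t1, math.hypot(x1 - x0, y1 - y0) / (t1 - t0)))
    accels = [(t1, (s1 - s0) / (t1 - t0))
              for (t0, s0), (t1, s1) in zip(speeds, speeds[1:])]
    return speeds, accels
```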
For classification, it is assumed that the initial state is no touch. A gesture is classified only when the number of regions and the frame summaries remain relatively constant for a predetermined amount of time, that is, when there are no trajectories. This takes care of the situation where not all fingers or hands reach the surface at exactly the same time to indicate a particular gesture.
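Building on the `select_gesture` and `Region` sketches above, this settle-before-classifying behavior might be expressed as follows; the hold time and class structure are hypothetical.

```python
HOLD_TIME = 0.25  # hypothetical settling time, in seconds


class GestureClassifier:
    """Defer classification until the number of touched regions has been
    constant, with no trajectories, for a predetermined amount of time."""

    def __init__(self):
        self.last_count = -1
        self.stable_since = None
        self.classified = False

    def update(self, timestamp, regions):
        count = len(regions)
        if count != self.last_count:
            # The touch configuration is still changing; restart the timer.
            self.last_count = count
            self.stable_since = timestamp
            self.classified = count == 0  # returning to no-touch resets
            return None
        if (count > 0 and not self.classified
                and timestamp - self.stable_since >= HOLD_TIME):
            self.classified = True  # classify once per touch episode
            return select_gesture([r.area for r in regions])
        return None
```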
After gesture classification, the system enters a particular operating mode corresponding to the classified gesture.
It should be noted that the touch surface as described here enables a different type of feedback than typical prior art touch and pointing devices. In the prior art, the feedback is typically based on the x and y coordinates of a zero-dimensional point, and is often displayed as a cursor, pointer, or cross. In contrast, the feedback according to the invention can be area based and, in addition, based on pressure or signal intensity. The feedback can be displayed as the actual area touched, or as a bounding perimeter, e.g., a circle or rectangle. The feedback also indicates that a particular gesture or operating mode has been recognized.
In a cut-and-paste operation, the fingers define and manipulate a selection box 801 over a displayed document 202. One can think of the fingers as being in a control space that is associated with a virtual window 804 spatially related to the selection box 801. Although the selection box halts at an edge of the document 202, the virtual window 804 associated with the control space continues to move along with the fingers and is consequently repositioned. Thus, the user can control the selection box from a location remote from the displayed document. This solves the obstruction problem. Furthermore, the dimensions of the selection box continue to correspond to the positions of the fingers. This mode of operation is maintained even if the user uses only two fingers to manipulate the selection box. Fingers on both hands can also be used to move and size the selection box. Touching the surface with another finger or a stylus 704 performs the copy. Lifting all fingers terminates the cut-and-paste.
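One possible realization of this halting behavior is a simple clamp of the finger-defined box against the document bounds; the tuple layout and function name below are purely illustrative assumptions.

```python
def clamp_selection(finger_box, document_box):
    """Map the fingers' control space onto the selection box: the virtual
    window follows the fingers freely, but each edge of the selection box
    halts at the corresponding document edge. Boxes are
    (left, top, right, bottom) tuples; purely illustrative."""
    fl, ft, fr, fb = finger_box
    dl, dt, dr, db = document_box
    return (max(fl, dl), max(ft, dt), min(fr, dr), min(fb, db))
```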
Although the invention has been described by way of examples of preferred embodiments, it is to be understood that various other adaptations and modifications may be made within the spirit and scope of the invention. Therefore, it is the object of the appended claims to cover all such variations and modifications as come within the true spirit and scope of the invention.