This invention relates generally to touch-sensitive display surfaces, and more particularly to emulating a mouse by touching a multi-touch sensitive display surface.
With personal computers, there are two basic ways to control the movement of a cursor on a display screen: indirect and direct. In the indirect way, which is the most common, a mouse or a finger on a touch pad is moved over a horizontal work surface, such as a tabletop, desktop, or laptop, while the cursor moves on a vertical display surface. The input and display spaces are disjoint. With touch-sensitive direct-touch display surfaces, the cursor follows the movement of a finger or stylus in direct contact with the display surface, and is usually positioned directly under the contact point. The display space and the input space are the same space and are calibrated to coincide.
In cursor control, two modes are typically recognized for manipulating the cursor: positioning and engagement. Positioning mode simply moves the cursor over the displayed content without explicitly altering or actively interacting with the content, while engagement actively interacts with the content, e.g., moving a selected window or changing the appearance of the selected content. In a traditional desktop environment, positioning the cursor is typically done by moving the mouse; engagement is achieved by pressing one or more mouse buttons and possibly also moving the mouse. Typical operations in the engagement mode include dragging, i.e., moving the cursor with a mouse button depressed, and clicking and double-clicking, i.e., quickly pressing and releasing a mouse button once or multiple times.
Note that typically, while positioning may cause visual changes in the displayed contents, the changes are incidental to the movement of the cursor; the changes are temporary, provided by the system/application, and are intended as feedback for the user. For example, some graphical user interface (GUI) elements provide ‘ToolTips’ that are triggered by a mouse-over; when the cursor is placed over such an element, an information bubble is displayed. As another example, when the cursor is moved into and out of a GUI element, the element may change its visual appearance, e.g., highlighting and un-highlighting itself to indicate that it is an active element. It is not until or unless a mouse button is activated that engagement occurs.
One of the more fundamental challenges for direct-touch input is that users may wish to move a cursor across a touch-sensitive display without engaging any ‘mouse’ buttons, e.g., simply move the cursor over an icon. However, when a user touches a touch-sensitive surface, it is difficult for the system to detect whether the touch was intended to simply move the cursor or to interact with content, e.g., to ‘drag’ content with the cursor, as is done with indirect-control by holding down the left mouse button during the movement.
Thus, direct touch systems suffer from a different variant of the well-known ‘Midas touch’ problem, i.e., every touch is significant; see Hansen, J., Andersen, A., and Roed, P., “Eye gaze control of multimedia systems,” ACM Symposium on Eye Tracking Research & Applications, 1995.
It is instructive to consider how other touch surfaces deal with this problem, even though most are not designed for large touch-sensitive display surfaces.
The touch pad found on most laptop computers usually also includes left and right mouse buttons. There is also a mechanism to switch between modes without using the buttons: a user can switch from moving the cursor to dragging it by tapping once on the pad and then quickly pressing down and holding on the pad. This tap-and-hold sequence is recognized as being similar to holding down the left mouse button with indirect control.
A second problem on a touch-sensitive display surface is that it can be difficult to precisely position a cursor with a relatively ‘large’ fingertip because the finger can obscure the exact portion of the display surface with which the user desires to interact.
This problem can be solved by offsetting the cursor from the touch location. However, this forfeits one of the big advantages of a direct input surface, that is, the ability to directly touch the displayed content to be controlled.
Some resistive or pressure-based touch-sensitive surfaces typically use the average of two consecutive finger touch locations as the displayed position of the cursor. Laptop touch pads provide a single point of input. However, these are indirect input devices, and they do not address the problems of fluidly switching between positioning and engagement mouse modes. In the case of a laptop touchpad, auxiliary buttons may be provided to address the issue of fluidly switching between modes, but this does not solve the problem of having to rely on additional indirect input devices.
U.S. patent application Ser. No. 11/048264, “Gestures for touch sensitive input devices,” filed by Hotelling et al. on Jan. 31, 2005, describes methods and systems for processing touch inputs for hand held devices from a single user. That system reads data from a multipoint sensing device such as a multipoint touch screen. The data pertain to touch input with respect to the multipoint sensing device and the data identify multipoint gestures. In particular, the systems described are typically held in one hand, while operated by the other hand. That system cannot identify and distinguish multiple touches by different users. That is, the system cannot determine if the person touching the screen is the same person holding the device or some other person. Because the device is hand held, the number of different gestures is severely limited.
One direct touch-sensitive surface, described in U.S. Pat. No. 6,670,561, “Coordinates input method,” issued to Aoki on Dec. 30, 2003, uses an average of two consecutive touch locations as the position of the cursor. However, with this particular technology it is not possible to detect whether one or multiple locations were touched simultaneously, which limits the usefulness of the device. For example, the device requires a dedicated on-screen ‘right click mode’ button to specify whether touches should be interpreted as left clicks or right clicks. This solution does not support positioning mode at all, avoiding the issue of how to emulate moving the cursor without holding down a button.
Another device uses a specially designed stylus, see U.S. Pat. No. 6,938,221, “User Interface for Stylus-Based User Input,” issued to Nguyen on Aug. 30, 2005; and U.S. Pat. No. 6,791,536, “Simulating Gestures of a Pointing Device using a Stylus and Providing Feedback Thereto,” issued to Keely et al. on Sep. 14, 2004. That device can detect ‘hovering,’ i.e., when the stylus is near the surface but not actually in contact with the surface. If the stylus is hovering, then the cursor is simply moved, i.e., positioned, and if the stylus is in contact with the surface, then the cursor is dragged, i.e., engaged.
Right clicking is supported by holding a button on the stylus, by bringing the stylus in contact with the surface for an extended moment, or by selecting a ‘right click’ displayed menu icon to indicate that the next touch should be interpreted as a right click. It is the lack of a hovering state, in addition to the two other states of touching or not touching, that makes emulating both mouse positioning and engagement modes so difficult on most touch surfaces. In most cases, such devices support only one of the modes, either positioning or engagement, with no smooth transition between the two.
It is desired to emulate a mouse by touching a multi-touch sensitive display surface.
The embodiments of the invention emulate mouse-like control with a multi-touch sensitive display surface. As defined herein, position and positioning apply to a displayed cursor, and location and locating apply to touches on the surface. That is, the positioning is virtual and relates to displaying a cursor or other graphic objects in an image displayed on the surface. The locating is physical, and relates to the physical sensing of contacts by fingers or the whole hand. Note that the methods as described herein are applicable to any multi-touch touch-sensitive device. Our preferred embodiment uses the touch surface as a table, but the surface can have any orientation, e.g., wall, table, or angled surface.
It is desired to emulate a hand-operated ‘mouse’ by touching the surface directly, for example with one or more fingers, one or two hands, a fist, and the like. It should be noted that the actions taken by the computer system depend on the underlying application programs that respond to the mouse events generated by the touching.
Multiple touches or gestures can be sensed concurrently for a single user or multiple users. It is also possible to identify particular users with the touches, even while multiple users touch the surface concurrently. Images are displayed on the surface by the projector 130 according to the touches as processed by the processor 140. The images include sets of graphic objects. A particular set can include one or more objects. The displayed objects can be items such as text, data, images, menus, icons, and pop-up items. In our preferred embodiment the touch-surface is front-projected; the display technology is independent of our interaction techniques. Our techniques can be used with any multi-touch touch-sensitive surface regardless of how the images are displayed.
We prefer to use a direct-touch display surface that is capable of sensing multiple locations touched concurrently by multiple users, see Dietz et al., “DiamondTouch: A multi-user touch technology,” Proc. User Interface Software and Technology (UIST), pp. 219-226, 2001, and U.S. Pat. No. 6,498,590, “Multi-user touch surface,” issued to Dietz et al. on Dec. 24, 2002, incorporated herein by reference. Hand gestures are described in U.S. patent application Ser. No. 10/659,180, “Hand Gesture Interaction with Touch Surface,” filed by Wu et al. on Sep. 10, 2003, incorporated herein by reference.
As a feature, the multi-touch sensitive display surface according to the invention does not require any physical buttons as found on a mouse, or other user interface.
Displayed graphic objects are controlled arbitrarily by touching the surface at or near the locations where the objects are displayed. By controlling, we mean that the objects can be moved, dragged, selected, highlighted, rotated, resized, re-oriented, etc., as they would be with a mechanical mouse. Re-orientation is defined as a translation and a rotation of the item with a single touching motion. The touching can be performed by fingers, hands, pointing or marking devices, such as a stylus or light pen, or other transducers appropriate for the display surface.
In order for mouse emulation to be smooth and natural on such a multi-touch sensitive display surface, a number of things are desired.
First, it is required to precisely position the cursor, a type of graphic object, on the display surface. This is a particular problem when fine positioning is attempted with a finger because the physical location of the finger typically obscures the virtual position of the cursor on the display surface.
Second, there must be a simple mechanism to switch between positioning mode, i.e., just moving the cursor, and engagement mode, i.e., dragging, or drawing.
Third, it is undesirable for this switching mechanism to require movement of the cursor itself. For example, after the cursor is moved to the display position that coincides with the physical location of the finger on the multi-touch sensitive surface, the cursor should remain at the same position during the switching.
Fourth, and perhaps most important, any solution for emulating mouse control should “feel” very easy and natural.
According to one embodiment of the invention, when a user touches the touch-sensitive surface with one finger, the system behaves as though a left mouse button is pressed. This facilitates a simple and intuitive behavior when the user is performing common operations such as scrolling, dragging, and drawing.
However, this makes it awkward to perform ‘mouse-over’ operations, such as positioning the cursor to activate menu items, tool tips, and image rollovers in web pages, wherein moving the cursor over an image changes its appearance. If the left mouse button is held down during what would normally be a mouse-over operation, then text may be unexpectedly selected, for example.
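As a minimal sketch of this single-touch behavior, the following fragment maps one contact to left-button events: a touch down presses the emulated left button under the finger, movement of the same finger drags with the button held, and lifting releases the button. The Touch and MouseEmulator names and the emit() callback are illustrative placeholders and are not part of the original disclosure.

```python
# Minimal sketch (illustrative names): a single contact emulates the left mouse button.
from dataclasses import dataclass

@dataclass
class Touch:
    touch_id: int   # identifier assigned by the touch sensor
    x: float        # contact location, already mapped to display coordinates
    y: float

class MouseEmulator:
    def __init__(self, emit):
        self.emit = emit      # callback that injects synthetic mouse events
        self.active = None    # id of the finger currently holding the button

    def on_touch_down(self, t: Touch):
        if self.active is None:               # first finger down
            self.active = t.touch_id
            self.emit("move", t.x, t.y)       # position the cursor under the finger
            self.emit("left_down", t.x, t.y)  # touching emulates pressing the left button

    def on_touch_move(self, t: Touch):
        if t.touch_id == self.active:
            self.emit("move", t.x, t.y)       # dragging with the button held

    def on_touch_up(self, t: Touch):
        if t.touch_id == self.active:
            self.emit("left_up", t.x, t.y)    # lifting emulates releasing the button
            self.active = None

if __name__ == "__main__":
    emu = MouseEmulator(lambda kind, x, y: print(kind, round(x), round(y)))
    emu.on_touch_down(Touch(1, 100, 200))   # left_down at (100, 200)
    emu.on_touch_move(Touch(1, 140, 180))   # drag
    emu.on_touch_up(Touch(1, 140, 180))     # left_up
```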
In practice, it seems most natural to use the thumb and middle finger of one hand to enter the cursor positioning mode. This allows the index finger to be used for tapping in between the other two fingers.
However, if the hand obscures the cursor or other displayed content, then the user can use two index fingers 501-502 to locate the cursor as shown in
It seems to be most natural and stable for a human hand to use the thumb and middle finger of one hand to specify the cursor position. The two fingers tend to ‘anchor’ the touch, which is particularly important when trying to precisely position the cursor.
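A brief sketch of this two-finger positioning mode follows, assuming for illustration that the cursor is displayed at the midpoint (average) of the two anchor contacts so that neither fingertip obscures it; the exact placement rule is an assumption, not a statement of the disclosed method.

```python
# Illustrative sketch: cursor positioning with two anchor fingers.
# The cursor is assumed to be placed at the midpoint of the two contact
# locations; moving either finger repositions the cursor without
# engaging any emulated mouse button.
def cursor_position(touch_a, touch_b):
    """touch_a and touch_b are (x, y) tuples in display coordinates."""
    (ax, ay), (bx, by) = touch_a, touch_b
    return ((ax + bx) / 2.0, (ay + by) / 2.0)

# Example: thumb at (300, 620) and middle finger at (360, 560) place the
# cursor between them at (330, 590), leaving it visible for a precise tap
# by the index finger.
print(cursor_position((300, 620), (360, 560)))
```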
To emulate clicking the left mouse button, the user simply taps quickly at a desired location. To emulate double-clicking with the left mouse button, the user simply taps twice quickly at the desired location.
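One way such taps could be distinguished from presses and from each other is sketched below; the timing and movement thresholds (TAP_MS, DOUBLE_MS, SLOP_PX) are assumed values for illustration and are not specified by this description.

```python
# Sketch of tap classification with assumed thresholds.
TAP_MS = 200      # assumed maximum press duration for a tap (ms)
DOUBLE_MS = 400   # assumed maximum gap between the taps of a double-click (ms)
SLOP_PX = 10      # assumed maximum finger movement during a tap (pixels)

def classify(events):
    """events: list of (kind, time_ms, x, y) tuples with kind 'down' or 'up'."""
    taps, down = [], None
    for kind, t, x, y in events:
        if kind == "down":
            down = (t, x, y)
        elif kind == "up" and down is not None:
            t0, x0, y0 = down
            if t - t0 <= TAP_MS and abs(x - x0) <= SLOP_PX and abs(y - y0) <= SLOP_PX:
                taps.append(t)        # short, stationary contact counts as a tap
            down = None
    if len(taps) >= 2 and taps[1] - taps[0] <= DOUBLE_MS:
        return "double_click"
    return "left_click" if taps else None

print(classify([("down", 0, 50, 50), ("up", 120, 52, 51)]))      # left_click
print(classify([("down", 0, 50, 50), ("up", 100, 50, 50),
                ("down", 250, 50, 50), ("up", 340, 50, 50)]))    # double_click
```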
According to an embodiment, to emulate pressing down the right mouse button, the user presses one finger down on the surface at the desired location, and then immediately taps elsewhere (down and up) with a second finger at an arbitrary second location. Subsequently moving the first finger effectively emulates dragging with the right mouse button depressed. After the second finger has tapped the surface, when the user stops pressing with the first finger, the system will emulate releasing the right mouse button. To emulate a right-click (button pressed and then released), the user simply presses with a first finger at the desired click location, taps briefly with a second finger, and then releases (stops touching) with the first finger. The state diagram for single-clicking and dragging with the right mouse button is shown in
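A hedged sketch of this right-button emulation is given below: while a first finger is held down at the click location, a brief down-and-up tap by a second finger presses the emulated right button at the first finger's location, subsequent movement of the first finger drags with the right button held, and lifting the first finger releases the button. The class name and event strings are illustrative, not taken from the original description.

```python
# Illustrative sketch: press-and-tap gesture emulating the right mouse button.
class RightButtonEmulator:
    def __init__(self, emit):
        self.emit = emit              # callback that injects synthetic mouse events
        self.primary = None           # (touch_id, x, y) of the finger held down
        self.secondary = None         # id of the tapping finger, if any
        self.right_engaged = False

    def on_touch_down(self, touch_id, x, y):
        if self.primary is None:
            self.primary = (touch_id, x, y)   # first finger marks the click location
        elif self.secondary is None:
            self.secondary = touch_id         # second finger begins its tap

    def on_touch_move(self, touch_id, x, y):
        if self.primary and touch_id == self.primary[0] and self.right_engaged:
            self.primary = (touch_id, x, y)
            self.emit("move", x, y)           # drag with the right button held

    def on_touch_up(self, touch_id, x, y):
        if touch_id == self.secondary:
            # the second finger tapped (down and up): press the right button
            self.secondary = None
            if not self.right_engaged and self.primary is not None:
                _, px, py = self.primary
                self.emit("right_down", px, py)
                self.right_engaged = True
        elif self.primary and touch_id == self.primary[0]:
            if self.right_engaged:
                self.emit("right_up", x, y)   # lifting the first finger releases
            self.primary, self.right_engaged = None, False

if __name__ == "__main__":
    emu = RightButtonEmulator(lambda kind, x, y: print(kind, x, y))
    emu.on_touch_down(1, 100, 100)   # first finger presses at the click location
    emu.on_touch_down(2, 400, 300)   # second finger taps elsewhere...
    emu.on_touch_up(2, 400, 300)     # ...which presses the right button at (100, 100)
    emu.on_touch_move(1, 120, 110)   # drag with the right button held
    emu.on_touch_up(1, 120, 110)     # releasing the first finger releases the button
```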
According to an embodiment, to emulate pressing down the middle mouse button, the user presses one finger down on the surface at the desired location, and then immediately taps twice elsewhere (down and up, but twice) with a second finger at an arbitrary second location. Subsequently moving the first finger will effectively emulate dragging with the middle mouse button depressed. After the second finger has tapped the surface twice, when the user stops pressing with the first finger, the system will emulate releasing the middle mouse button. To emulate a middle-click (button pressed and then released), the user simply presses with the first finger at the desired click location, taps briefly twice with the second finger, and then releases (stops touching) with the first finger. The state diagram for single-clicking and dragging with the middle mouse button is shown in
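The middle-button variant differs only in the number of taps made by the second finger while the first finger is held down. In a sketch like the one above, the tap count would simply select which emulated button to press; the mapping below uses assumed names.

```python
# Assumed generalization of the previous sketch: the number of taps by the
# second finger selects the emulated button (1 tap -> right, 2 taps -> middle).
BUTTON_FOR_TAPS = {1: "right", 2: "middle"}

def button_for(tap_count):
    return BUTTON_FOR_TAPS.get(tap_count)

print(button_for(1))  # right
print(button_for(2))  # middle
```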
According to an embodiment, a user may emulate moving the mouse cursor, i.e. repositioning the mouse cursor with no mouse buttons engaged. To do this, starting, as shown in
According to this embodiment of the invention, to emulate rotating a mouse wheel, the user presses one fist down on the surface, and then slides that fist up/away or down/closer to emulate scrolling the mouse wheel up or down. This embodiment relies on the fact that the system can determine a size of an area being touched. In this case, the area touched by a fingertip is substantially smaller than the area touched by a closed fist. The ratio of sliding amount to resultant mouse wheel rotation amount may be configurable. This is shown in
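A sketch of this wheel emulation follows, assuming that a contact whose reported area exceeds a threshold is treated as a fist and that a configurable ratio converts the vertical sliding distance into wheel ticks; the FIST_AREA and WHEEL_TICKS_PER_PX names and values are illustrative, not taken from the original description.

```python
# Illustrative sketch: a sliding fist emulates rotating the mouse wheel.
FIST_AREA = 2000.0         # assumed minimum contact area (sensor units) for a fist
WHEEL_TICKS_PER_PX = 0.05  # configurable ratio of sliding distance to wheel rotation

def wheel_ticks(contact_area, start_y, end_y):
    """Positive result scrolls up (fist slid away); negative scrolls down."""
    if contact_area < FIST_AREA:
        return 0                               # fingertip-sized contact: not a wheel gesture
    return int((start_y - end_y) * WHEEL_TICKS_PER_PX)

print(wheel_ticks(2500.0, 600, 400))   # fist slid 200 px away from the user: scroll up 10
print(wheel_ticks(150.0, 600, 400))    # fingertip contact: ignored
```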
It is to be understood that various other adaptations and modifications may be made within the spirit and scope of the invention. Therefore, it is the object of the appended claims to cover all such variations and modifications as come within the true spirit and scope of the invention.
This Application is a Continuation of prior U.S. patent application Ser. No. 11/416,719, filed May 3, 2006, by Esenther et al.
Relation | Number | Date | Country
---|---|---|---
Parent | 11416719 | May 2006 | US
Child | 13194597 | | US