In various embodiments, the present invention relates to input mechanisms for controlling electronic devices, and more particularly to a touch-sensitive control area that extends beyond the edges of a display screen on such a device.
It is well-known to provide touch-sensitive screens for electronic devices. Touch-sensitive screens allow an electronic display to function as an input device, thus providing great flexibility in the type of interactions that can be supported. In many devices, touch-sensitive screens are used to replace pointing devices such as trackballs, mice, five-way switches, and the like. In other devices, touch-sensitive screens can supplement, or be supplemented by, other input mechanisms.
Touch-sensitive screens provide several advantages over other input mechanisms. Touch-sensitive screens can replace physical buttons by providing on-screen buttons that can be touched by the user. The on-screen buttons can be arranged so that they resemble an alphabetic or numeric keyboard, or they can have specialized functions. This often simplifies input operations by providing only those options that are relevant at a given time.
Touch-sensitive screens can also help to provide customizability and globalization of input mechanisms. An on-screen keyboard can be easily adapted to any desired language, and extra keys can be provided as appropriate to the specific application. Certain buttons can be highlighted, moved, or otherwise modified in a dynamic way to suit the application.
In addition, touch-sensitive screens can be more reliable than physical keyboards, because they reduce the reliance on moving parts and physical switches.
One particular advantage of touch-sensitive screens is that they allow direct manipulation of on-screen objects, for example by facilitating control and/or activation of such objects by touching, tapping, and/or dragging. Thus, when a number of items are displayed on a screen, touch-sensitivity allows a user to perform such operations on specific items in a direct and intuitive way.
However, some operations in connection with control of an electronic device are not particularly well suited to direct manipulation. These include operations that affect the entire screen, application environment, or the device itself. On-screen buttons can be provided to allow access to such operations, but such buttons occupy screen space that can be extremely valuable, especially in compact, mobile devices. In addition, providing on-screen buttons for such functions allows only a limited set of operations to be available at any given time, since there is often insufficient screen space to provide buttons for all such functions.
In some cases, on-screen buttons or objects are relatively small, causing some users to have difficulty activating the correct command or object, or even to inadvertently activate the wrong command or manipulate the wrong object. This problem, which is particularly prevalent in devices having small screens, can cause touch-screens to be relatively unforgiving in their interpretation of user input. In addition, as a natural consequence of combining an output device with an input device in the same physical space, the use of a touch-screen often causes users to obscure part of the screen in order to interact with it. Screen layouts may be designed so that important elements tend not to be obscured; however, such design may not take into account right- or left-handedness.
Another disadvantage of touch-sensitive screens is that their dynamic nature makes it difficult for users to provide input without looking at the screen. A user cannot normally discern the current state of the device without looking at it, and therefore cannot be sure as to the current location or state of various on-screen buttons and controls at any given time. This makes it difficult to control the device while it is in one's pocket, or while one is engaged in a task that inhibits one's ability to look at the device.
What is needed is a system and method that provides the advantages of touch-sensitive screens while avoiding their limitations. What is further needed is a system and method that facilitates direct manipulation of on-screen objects while also providing mechanisms for performing commands for which direct manipulation is not well-suited. What is further needed is a system and method that provides access to a wide variety of commands and allows input of such commands in a simple, intuitive way, without cluttering areas of a display screen with an excess of buttons and controls.
According to various embodiments of the present invention, a touch-sensitive display screen is enhanced by a touch-sensitive control area that extends beyond the edges of the display screen. The touch-sensitive area outside the display screen, referred to as a “gesture area,” allows a user to activate commands using a gesture vocabulary. Commands entered in the gesture area can be independent of the current contents of the display screen. Certain commands can therefore be made available at all times without taking up valuable screen space, an advantage that is of particular benefit for small mobile devices.
In one embodiment, the present invention allows some commands to be activated by inputting a gesture within the gesture area. Other commands can be activated by directly manipulating on-screen objects, as in a conventional touch-sensitive screen. Yet other commands can be activated via a combination of these two input mechanisms. Specifically, the user can begin a gesture within the gesture area, and finish it on the screen (or vice versa), or can perform input that involves contemporaneous contact with both the gesture area and the screen. Since both the gesture area and the screen are touch-sensitive, the device is able to interpret input that includes one or both of these areas, and can perform whatever action is appropriate to such input.
In one embodiment, this highly flexible approach allows, for example, a command to be specified in terms of an action and a target: a particular gesture as performed in the gesture area can specify the action to be performed, while the particular on-screen location where the user finishes (or starts) the input can specify a target (such as an on-screen object) on which the command is to be performed. The gesture area can also be used to provide input that modifies a command entered by direct manipulation on the screen.
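For illustrative purposes, the action-plus-target pattern described above can be sketched as follows. All names (`COMMANDS`, `resolve_target`, `execute`) are hypothetical and are not part of this description; hit-testing and gesture recognition are reduced to simple lookups.

```python
# Illustrative sketch only: a gesture-area gesture names the action,
# and an on-screen touch location names the target object.

COMMANDS = {
    "swipe_up": lambda target: f"maximized {target}",
    "swipe_left": lambda target: f"deleted {target}",
}

def resolve_target(screen_objects, x, y):
    """Return the name of the on-screen object whose bounding box
    (left, top, right, bottom) contains the point (x, y), if any."""
    for name, (left, top, right, bottom) in screen_objects.items():
        if left <= x <= right and top <= y <= bottom:
            return name
    return None

def execute(gesture_name, screen_objects, touch_x, touch_y):
    """Combine a gesture-area action with an on-screen target and
    perform the resulting command, if both parts are recognized."""
    target = resolve_target(screen_objects, touch_x, touch_y)
    action = COMMANDS.get(gesture_name)
    if action and target:
        return action(target)
    return None
```

The same structure supports the modifier case mentioned above: a gesture-area contact could instead alter the mapping consulted for a direct-manipulation command.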
The ability to detect gestures allows a large vocabulary to be developed, so that a large number of commands can be made available without obscuring parts of the screen with buttons, menus, and other controls. The combination of such a gesture vocabulary with direct manipulation provides unique advantages not found in prior art systems.
In various embodiments, the present invention also provides a way to design a user interface that is simple and easy for beginners, while allowing sophisticated users to access more complex features and to perform shortcuts. Beginners can rely on the direct manipulation of on-screen objects, while the more advanced users can learn more and more gestures as they become more familiar with the device.
In addition, in various embodiments, the present invention provides a mechanism for providing certain commands in a consistent manner at all times where appropriate. The user can be assured that a particular gesture, performed in the gesture area, will cause a certain action to be performed, regardless of what is on the screen at a given time.
In various embodiments, the present invention also provides an input interface that is more forgiving than existing touch-sensitive screens. Users need not be as precise with their input operations, since a larger area is available. Some gestures may be performed at any location within the gesture area, so that the user need not be particularly accurate with his or her fingers when inputting a command. Users can also perform such gestures without obscuring a portion of the screen. Users can also more easily use the input mechanism when not looking at the screen, since gestures can be performed in the gesture area without reference to what is currently displayed on the screen.
Accordingly, the present invention in one embodiment provides a mechanism for facilitating access to a large number of commands in a limited space and without the need for a large number of on-screen buttons or physical buttons, and for providing the advantages of direct manipulation while avoiding its limitations.
Additional advantages will become apparent in the following detailed description.
The accompanying drawings illustrate several embodiments of the invention and, together with the description, serve to explain the principles of the invention. One skilled in the art will recognize that the particular embodiments illustrated in the drawings are merely exemplary, and are not intended to limit the scope of the present invention.
For purposes of the following description, the following terms are defined:
In various embodiments, the present invention can be implemented on any electronic device, such as a handheld computer, personal digital assistant (PDA), personal computer, kiosk, cellular telephone, and the like. For example, the invention can be implemented as a command input paradigm for a software application or operating system running on such a device. Accordingly, the present invention can be implemented as part of a graphical user interface for controlling software on such a device.
In various embodiments, the invention is particularly well-suited to devices such as smartphones, handheld computers, and PDAs, which have limited screen space and in which a large number of commands may be available at any given time. One skilled in the art will recognize, however, that the invention can be practiced in many other contexts, including any environment in which it is useful to provide access to commands via a gesture-based input paradigm, while also allowing direct manipulation of on-screen objects where appropriate. Accordingly, the following description is intended to illustrate the invention by way of example, rather than to limit the scope of the claimed invention.
Referring now to
For illustrative purposes, device 100 as shown in
In various embodiments, touch-sensitive screen 101 and gesture area 102 can be implemented using any technology that is capable of detecting a location of contact. One skilled in the art will recognize that many types of touch-sensitive screens and surfaces exist and are well-known in the art, including for example:
Any of the above techniques, or any other known touch detection technique, can be used in connection with the device of the present invention, to detect user contact with screen 101, gesture area 102, or both.
In one embodiment, the present invention can be implemented using a screen 101 and/or gesture area 102 capable of detecting two or more simultaneous touch points, according to techniques that are well known in the art. The touch points can all be located on screen 101 or on gesture area 102, or some can be located on each.
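One way to represent such simultaneous touch points, distinguishing those on screen 101 from those in gesture area 102, is sketched below. The layout assumption (a gesture area directly below the display) and all names are illustrative only.

```python
from dataclasses import dataclass

@dataclass
class TouchPoint:
    x: float
    y: float

def classify(point, screen_height):
    """Classify a touch as on-screen or in the gesture area, assuming
    (illustratively) that the gesture area lies directly below the display,
    so any y-coordinate past the screen height falls in the gesture area."""
    return "screen" if point.y < screen_height else "gesture_area"

def partition(points, screen_height):
    """Group simultaneous touch points by the surface they contact."""
    regions = {"screen": [], "gesture_area": []}
    for p in points:
        regions[classify(p, screen_height)].append(p)
    return regions
```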
In one embodiment, the present invention can be implemented using other gesture recognition technologies that do not necessarily require contact with the device. For example, a gesture may be performed over the surface of a device (either over screen 101 or gesture area 102), or it may begin over the surface of a device and terminate with a touch on the device (either on screen 101 or gesture area 102). It will be recognized by one with skill in the art that the techniques described herein can be applied to such non-touch-based gesture recognition techniques.
In one embodiment, device 100 as shown in
In the example of
One skilled in the art will recognize that, in various embodiments, gesture area 102 can be provided in any location with respect to screen 101 and need not be placed immediately below screen 101 as shown in
In various embodiments, gesture area 102 can be visibly delineated on the surface of device 100, if desired, for example by an outline around gesture area 102, or by providing a different surface texture, color, and/or finish for gesture area 102 as compared with other surfaces of device 100. Such delineation is not necessary for operation of the present invention.
Referring now to
Referring now to
For illustrative purposes,
In general, in various embodiments, the user can input a touch command on device 100 by any of several methods, such as:
In one embodiment, as described above, the present invention provides a way to implement a vocabulary of touch commands including gestures that are performed within gesture area 102, within screen 101, or on some combination of the two. As mentioned above, gestures can also be performed over the surface of gesture area 102 and/or screen 101, without necessarily contacting these surfaces. The invention thus expands the available space and the vocabulary of gestures over prior art systems.
Referring now to
In one embodiment, device 100 allows for some variation in the angles of gestures, so that the gestures need not be precisely horizontal or vertical to be recognized. Device 100 is able to identify the user's intent as a horizontal or vertical gesture, or other recognizable gesture, even if the user deviates from the definitive, ideal formulation of the gesture.
In one embodiment, gestures can be recognized regardless of the current orientation of device 100. Thus, a particular gesture would generally have the same meaning whether device 100 is in its normal orientation or rotated by 180 degrees, 90 degrees, or some other amount. In one embodiment, device 100 includes orientation sensors to detect the current orientation according to well known techniques.
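The tolerance and orientation-compensation behaviors described in the two paragraphs above can be sketched as follows. The tolerance value and function names are illustrative assumptions, not part of this description.

```python
import math

def classify_swipe(dx, dy, device_rotation_deg=0, tolerance_deg=25.0):
    """Snap a drag vector to up/down/left/right when it falls within an
    angular tolerance of the ideal axis; return None if unrecognized.
    device_rotation_deg compensates for the device's current orientation
    (e.g. 180 when the device is held upside down)."""
    # Screen y-coordinates grow downward, so negate dy for standard angles.
    angle = (math.degrees(math.atan2(-dy, dx)) + device_rotation_deg) % 360
    for name, ideal in (("right", 0), ("up", 90), ("left", 180), ("down", 270)):
        diff = min(abs(angle - ideal), 360 - abs(angle - ideal))
        if diff <= tolerance_deg:
            return name
    return None
```

A slightly slanted horizontal drag thus still registers as "right", while a 45-degree drag, being ambiguous, is rejected rather than guessed.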
Commands Performed within Gesture Area 102
In the example of
In the example of
In the example of
In the example of
In one embodiment, the user can also initiate some commands by direct manipulation of objects 401 on screen 101. Direct manipulation is particularly well-suited to commands whose target is represented by an on-screen object 401. Examples are discussed below.
Focus/Act: In one embodiment, the user can tap on an object 401 or on some other area of screen 101 to focus the object 401 or screen area, or to perform an action identified by the object 401 or screen area, such as opening a document or activating an application.
Referring now to
Select/Highlight: Referring now to
In one embodiment, a modifier key, such as a shift key, is provided. Modifier key may be physical button 103, or some other button (not shown). Certain commands performed in touch-sensitive screen 101 can be modified by performing the command while holding down the modifier key, or by pressing the modifier key prior to performing the command.
For example, the user can perform a shift-tap on an object 401 by tapping on an object 401 or on some other area of screen 101 while holding the modifier key. In one embodiment, this selects or highlights an object 401, without de-selecting any other objects 401 that may have previously been selected.
In one embodiment, the modifier key can also be used to perform a shift-drag command. While holding the modifier key, the user drags across a range of objects 401 to select a contiguous group, as shown in
Referring now to
In one embodiment, a button can be shown, to provide access to a screen for performing more detailed editing operations. Referring now to
Scroll: Referring now to
In one embodiment, the user can also flick across screen 101 in a direction that supports scrolling for the current state of the display. In one embodiment, the flick must start immediately upon contact with screen 101, and the user's finger must leave the surface of screen 101 before stopping movement, in order to be recognized as a flick. The current display scrolls by an amount proportional to the speed and distance of the flick. A drag-scroll may be converted into a flick by lifting the finger before coming to rest.
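The distinction drawn above between a drag-scroll and a flick can be sketched from a series of (time, position) samples along the scroll axis. The speed threshold and proportionality rule below are illustrative guesses, not values from this description.

```python
def interpret_scroll_gesture(samples, flick_speed_threshold=500.0):
    """Classify a touch track as a flick or a drag-scroll.
    samples: list of (time_seconds, position_pixels) along the scroll axis.
    A flick requires the finger to still be moving at lift-off, i.e. the
    speed over the final interval exceeds a threshold (pixels/second)."""
    if len(samples) < 2:
        return ("none", 0.0)
    (t0, p0), (t1, p1) = samples[-2], samples[-1]
    lift_speed = abs(p1 - p0) / max(t1 - t0, 1e-6)
    distance = samples[-1][1] - samples[0][1]
    if lift_speed >= flick_speed_threshold:
        # Scroll momentum proportional to both speed and distance,
        # per the description above.
        return ("flick", lift_speed * distance)
    # Finger came to rest before lifting: an ordinary drag-scroll.
    return ("drag", distance)
```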
In one embodiment, if the display on screen 101 is already scrolling, then a tap or drag immediately interrupts the current scroll. If the user tapped, the current scroll stops. If the user dragged, a new drag-scroll is initiated.
Referring now to
Next/Previous: In certain embodiments and contexts, the user can drag across screen 101 horizontally to show the next or previous item in a sequence of items. This can be distinguished from a drag scroll by being executed perpendicular to the axis of scrolling.
Zoom: Referring now to
In one embodiment, the user can also double-tap (tap twice within some period of time) on a desired center point of a zoom operation. This causes the display to zoom in by a predetermined amount. In one embodiment, if the user taps on gesture area 102, the display zooms out by a predetermined amount.
Fit: Referring now to
Text Navigation: Referring now to
Referring now to
Move: Referring now to
In one embodiment, if, while performing the gesture 402P, the user drags the object 401A over a valid target object 401 that can act on or receive the dragged object 401A, visual feedback is provided to indicate that the potential target object 401 is a valid target. For example, the potential target object 401 may be momentarily highlighted while the dragged object 401A is positioned over it. If the user ends gesture 402P while the dragged object 401A is over a valid target object 401, an appropriate action is performed: for example, the dragged object 401A may be inserted in the target object 401, or the target object 401 may launch as an application and open the dragged object 401.
In a list view, a move operation can cause items in the list to be reordered. Referring now to
Delete: In one embodiment, the user can delete an item by performing a swipe gesture to drag the item off screen 101. Referring now to
As shown in
Another example of swipe gesture 402EE is shown in
One skilled in the art will recognize that other gestures 402 may be performed on screen 101, according to well-known techniques of direct manipulation in connection with touch-sensitive screens and objects displayed thereon.
Commands Performed by Combining Gestures in Gesture Area 102 with Input on Touch-Sensitive Screen 101
In one embodiment, the device of the present invention recognizes commands that are activated by combining gestures 402 in gesture area 102 with input on touch-sensitive screen 101. Such commands may be activated by, for example:
One example of such a gesture 402 is to perform any of the previously-described gestures on screen 101 while also touching gesture area 102. Thus, the contact with gesture area 102 serves as a modifier for the gesture 402 being performed on screen 101.
Another example is to perform one of the previously-described gestures in gesture area 102, while also touching an object 401 on screen 101. Thus, the contact with the object 401 serves as a modifier for the gesture 402 being performed in gesture area 102.
In some embodiments, the display changes while a user is in the process of performing a gesture in gesture area 102, to reflect current valid targets for the gesture. In this manner, when a user begins a gesture in gesture area 102, he or she is presented with positive feedback that the gesture is recognized along with an indication of valid targets for the gesture.
Referring now to
Alternatively, in one embodiment, the user can perform a two-part gesture sequence: a tap gesture 402 in gesture area 102, followed by a tap, drag, or other gesture 402 on an on-screen object 401 or other area of screen 101 so as to identify the intended target of the gesture sequence. In one embodiment, the user can perform the tap gesture 402G anywhere within gesture area 102; in another embodiment, the gesture may have different meaning depending on where it is performed. In one embodiment, the sequence can be reversed, so that the target object 401 can be identified first by a tap on screen 101, and the action to be performed can be indicated subsequently by a gesture 402 in gesture area 102.
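The two-part sequence described above, in either order, can be sketched as a small state machine. The class and method names are illustrative and not part of this description.

```python
class GestureSequence:
    """Minimal state machine for a two-part gesture sequence: a
    gesture-area command followed by an on-screen target, or the reverse.
    Each handler returns a (command, target) pair once both halves have
    been received, and None while the sequence is still incomplete."""

    def __init__(self):
        self.pending_command = None
        self.pending_target = None

    def gesture_area_tap(self, command):
        if self.pending_target is not None:
            # Target-first order: the target was identified on screen first.
            target, self.pending_target = self.pending_target, None
            return (command, target)
        self.pending_command = command
        return None

    def screen_tap(self, target):
        if self.pending_command is not None:
            # Command-first order: the action was indicated in the gesture area.
            command, self.pending_command = self.pending_command, None
            return (command, target)
        self.pending_target = target
        return None
```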
Referring now to
Referring now to
In the example of
Additional examples are shown in
In the example of
In one embodiment, a gesture performs the same function whether entered entirely within gesture area 102 (as in
Referring now to
In one embodiment, the user provides input in the form of contact with gesture area 102 and/or contact with touch-sensitive screen 101. As described above, if both surfaces are touched, the contact with gesture area 102 can precede or follow the contact with touch-sensitive screen 101, or the two touches can take place substantially simultaneously or contemporaneously.
In one embodiment, if device 100 detects 501 contact with gesture area 102, it identifies 502 a command associated with the gesture the user performed in touching gesture area 102. Then, if device 100 detects 503A contact with touch-sensitive screen 101, it executes 504 a command identified by the contact with gesture area 102 and with touch-sensitive screen 101. For example, the gesture area 102 gesture may identify the command and the screen 101 gesture may specify a target for the command, as described in more detail above. If, in 503A, device 100 does not detect contact with touch-sensitive screen 101, it executes 505 a command identified by the contact with gesture area 102.
In one embodiment, if, in 501, device 100 does not detect contact with gesture area 102, but it detects 503B contact with touch-sensitive screen 101, it executes 506 a command identified by the contact with touch-sensitive screen 101. For example, the screen 101 gesture may specify an action and a target by direct manipulation, such as by tapping, as described in more detail above.
In one embodiment, if device 100 does not detect 501 contact with gesture area 102 and does not detect 503B contact with screen 101, no action is taken 507.
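The dispatch logic of steps 501 through 507, as described in the preceding paragraphs, can be sketched as follows. Contacts are reduced to optional identifiers; actual input processing and gesture recognition are omitted, and the return values are illustrative.

```python
def dispatch(gesture_area_contact, screen_contact):
    """Sketch of the dispatch flow described above (steps 501-507).
    Each argument is None when no contact was detected on that surface,
    or an identifier for the recognized input otherwise."""
    if gesture_area_contact is not None:                  # step 501
        if screen_contact is not None:                    # step 503A
            # Step 504: combined command, e.g. the gesture-area gesture
            # names the action and the screen contact names the target.
            return ("combined", gesture_area_contact, screen_contact)
        # Step 505: command identified by the gesture-area contact alone.
        return ("gesture_only", gesture_area_contact)
    if screen_contact is not None:                        # step 503B
        # Step 506: direct-manipulation command on the screen alone.
        return ("screen_only", screen_contact)
    return None  # step 507: no contact detected; no action taken
```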
As can be seen from the above description, the present invention provides several advantages over prior art devices employing touch-sensitive surfaces and screens. By employing the techniques described above, the present invention simplifies operation of the device, and provides the potential to offer a user a large vocabulary of possible actions in a compact space. For example, beginners can use direct manipulation as the primary input mechanism, while expert users can use gestures.
The present invention has been described in particular detail with respect to one possible embodiment. Those of skill in the art will appreciate that the invention may be practiced in other embodiments. First, the particular naming of the components, capitalization of terms, the attributes, data structures, or any other programming or structural aspect is not mandatory or significant, and the mechanisms that implement the invention or its features may have different names, formats, or protocols. Further, the system may be implemented via a combination of hardware and software, as described, or entirely in hardware elements, or entirely in software elements. Also, the particular division of functionality between the various system components described herein is merely exemplary, and not mandatory; functions performed by a single system component may instead be performed by multiple components, and functions performed by multiple components may instead be performed by a single component.
Reference herein to “one embodiment”, “an embodiment”, or to “one or more embodiments” means that a particular feature, structure, or characteristic described in connection with the embodiments is included in at least one embodiment of the invention. Further, it is noted that instances of the phrase “in one embodiment” herein are not necessarily all referring to the same embodiment.
Some portions of the above are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps (instructions) leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical, magnetic or optical signals capable of being stored, transferred, combined, compared and otherwise manipulated. It is convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. Furthermore, it is also convenient at times, to refer to certain arrangements of steps requiring physical manipulations of physical quantities as modules or code devices, without loss of generality.
It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussion, it is appreciated that throughout the description, discussions utilizing terms such as “processing” or “computing” or “calculating” or “displaying” or “determining” or the like, refer to the action and processes of a computer system, or similar electronic computing module and/or device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system memories or registers or other such information storage, transmission or display devices.
Certain aspects of the present invention include process steps and instructions described herein in the form of an algorithm. It should be noted that the process steps and instructions of the present invention can be embodied in software, firmware or hardware, and when embodied in software, can be downloaded to reside on and be operated from different platforms used by a variety of operating systems.
The present invention also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, application specific integrated circuits (ASICs), or any type of media suitable for storing electronic instructions, and each coupled to a computer system bus. Further, the computers referred to herein may include a single processor or may be architectures employing multiple processor designs for increased computing capability.
The algorithms and displays presented herein are not inherently related to any particular computer, virtualized system, or other apparatus. Various general-purpose systems may also be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method steps. The required structure for a variety of these systems will be apparent from the description above. In addition, the present invention is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any references above to specific languages are provided for disclosure of enablement and best mode of the present invention.
While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having benefit of the above description, will appreciate that other embodiments may be devised which do not depart from the scope of the present invention as described herein. In addition, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter. Accordingly, the disclosure of the present invention is intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the claims.
The present application is related to U.S. patent application Ser. No. 11/379,552, filed Apr. 20, 2006 for “Keypad and Sensor Combination to Provide Detection Region that Overlays Keys”, the disclosure of which is incorporated herein by reference.