A variety of user interfaces have been developed to allow humans to control machines. In the world of computers, graphical user interfaces are used in an attempt to make operating a computer more intuitive. One popular graphical user interface utilizes a desktop metaphor, which treats a computer display as a virtual desktop upon which documents and folders of documents can be placed. Documents can take the form of text documents, photographs, movies, and various other content. A document can be opened into a window, which may represent a paper copy of the document placed on the virtual desktop.
While much work has been put into advancing the desktop metaphor, users continually seek easier ways to interact with digital content.
Predictive gesturing for use within a graphical user interface is provided. The predictive gesturing may be implemented on a variety of different computing platforms, including surface computing systems. Predictive gesturing facilitates the learning and execution of gestures that are used to control a graphical user interface. When a user begins to perform a gesture, a gesture-predicting engine predicts which gestures the user may be attempting, and a rendering engine displays hints for completing the predicted gestures. As the user continues the gesture, the gesture-predicting engine may progressively eliminate hints that are associated with predicted gestures from which the user gesture has diverged.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
The present disclosure is directed to predictive gesturing in a graphical user interface that is at least partially controllable by user gestures. The following description provides a surface computing system as one possible example of a virtual workspace environment in which user gestures can be used to control a computing platform having a graphical user interface. However, other computing platforms can be used in accordance with the present disclosure. For example, while the below description refers to a user gesture in the form of a user finger interacting with the input surface of a surface computing system, a functionally analogous input may take the form of a computer mouse controlling a virtual pointer.
The predictive gesturing described below is considered to be applicable across a wide range of computing platforms and is not limited to surface computing systems. As such, the below description of a user gesture includes surface computing gestures without necessarily being restricted to only those gestures performed on a surface computing system. Predictive gesturing is also applicable to gestures made using mice, trackballs, trackpads, input pens, and other input devices for graphical user interfaces. Predictive gesturing may be implemented as a feature within a specific application or as a global feature of a computing device.
Surface computing system 100 includes a gesture input 110 that is configured to translate a user gesture into a command for controlling the surface computing system. The gesture input may recognize the position of a user gesture relative to the display, and map the user gesture to a corresponding portion of the display. It may be said that the gesture input is operatively aligned with the display.
As used herein, the term gesture is used to refer to any user motion that can be detected by gesture input 110. Gestures can be performed as short or long movements, arbitrary or prescriptive movements, and straightforward or non-intuitive movements. Gestures can be performed with a single contact, such as a finger, pen, hand, or other input device. Gestures can also be performed with more than one contact, such as two fingers, two hands, etc. Nonlimiting examples of gestures include tracing an “S” shape over one or more virtual objects to execute a save command, circling one or more virtual objects to select the virtual objects, and dragging one or more virtual objects to move the virtual objects.
Various aspects of a gesture can be used to distinguish one gesture from another. One distinguishing aspect is the path of the gesture, which can be referred to as the gesture path. Other aspects that can be used to distinguish gestures are the distance the gesture covers and/or the speed with which the gesture is made.
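For illustration only, the sketch below shows one minimal way these distinguishing aspects might be computed, assuming a gesture is sampled as a sequence of timestamped points; the names GesturePoint, path_length, and average_speed are hypothetical and not taken from the disclosure.

```python
import math
from dataclasses import dataclass
from typing import List

@dataclass
class GesturePoint:
    x: float
    y: float
    t: float  # timestamp in seconds (assumed sampling format)

def path_length(points: List[GesturePoint]) -> float:
    """Total distance covered by the gesture path."""
    return sum(
        math.hypot(b.x - a.x, b.y - a.y)
        for a, b in zip(points, points[1:])
    )

def average_speed(points: List[GesturePoint]) -> float:
    """Distance covered divided by elapsed time."""
    if len(points) < 2:
        return 0.0
    elapsed = points[-1].t - points[0].t
    return path_length(points) / elapsed if elapsed > 0 else 0.0

# Example: a short, quick horizontal swipe.
swipe = [GesturePoint(0, 0, 0.0), GesturePoint(40, 2, 0.1), GesturePoint(80, 1, 0.2)]
print(path_length(swipe), average_speed(swipe))
```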
The gesture input may recognize and track a user gesture via a touch sensitive surface, such as a capacitive and/or resistive touch screen. The gesture input may additionally or alternatively recognize and track a user gesture via an optical monitoring system that effectively views an input surface operatively aligned with the display to detect finger movement at or around the input surface. These or other input mechanisms can be used without departing from the scope of the present disclosure. As used herein, the term gesture input is used to refer to the actual surface with which a user interacts, as well as any complementary electronics or other devices that work to translate user gestures into commands that can be used to control the surface computing system.
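One way to picture the gesture input is as a layer that normalizes raw contact readings from a capacitive, resistive, or optical sensor into display coordinates, keeping the input surface operatively aligned with the display. The sketch below is a minimal illustration under that assumption; the class names and the normalized coordinate convention are not taken from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class Contact:
    # Raw sensor reading, assumed normalized to [0, 1] in each axis by the sensor driver.
    u: float
    v: float

class GestureInput:
    """Hypothetical abstraction: maps raw contacts onto display coordinates."""

    def __init__(self, display_width: int, display_height: int):
        self.display_width = display_width
        self.display_height = display_height

    def to_display(self, contact: Contact) -> tuple:
        """Map a normalized sensor contact to the corresponding display position."""
        return (contact.u * self.display_width, contact.v * self.display_height)

# Example: a touch at the center of the sensor maps to the center of a 1920x1080 display.
gesture_input = GestureInput(1920, 1080)
print(gesture_input.to_display(Contact(0.5, 0.5)))  # (960.0, 540.0)
```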
Gesture input 110 allows a user to use a finger, or the like, to touch and manipulate interactive user interface elements and virtual objects in the virtual workspace of a surface computing system. A gesture input can enable users to avoid at least two interaction intermediaries that are present with other input mechanisms. First, the gesture input does not rely on an external device, such as a computer mouse, to control an on-screen cursor or pointer. Second, the use of on-screen scroll bars or similar controls that manipulate other on-screen elements may be limited, if not avoided altogether. The gesture input may allow a user to directly touch and manipulate a virtual object, such as a list, without having to use a mouse or other input device to control an on-screen cursor, which in turn controls on-screen control elements, such as scroll bars.
A surface computing system may be configured to recognize a large number of different gestures, each of which may correspond to a different command. Some of the gestures may be simple and intuitive, and thus, easy for a user to learn. Other gestures may be more complicated and/or less intuitive. Such gestures may be more difficult for a user to learn and/or remember.
As shown in
The gesture-predicting engine analyzes user gestures that the gesture input receives. In particular, the gesture-predicting engine predicts which gestures a user may be attempting, or which gestures are possible, based on the beginning portion of a particular user gesture. The gesture-predicting engine predicts the possible commands that are associated with the gestures that could be completed from the beginning of the analyzed user gesture. As a user gesture continues, the gesture-predicting engine may progressively eliminate commands associated with gestures that do not match the analyzed user gesture.
For example,
The catalogued gestures that have the same beginning, or at least a similar beginning, as the user gesture can be flagged as possibilities. As the user gesture is beginning, there may be a very large number of possibilities. As the user gesture continues and diverges from some of the possibilities, some possibilities may be eliminated.
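A minimal sketch of this flag-and-eliminate behavior follows, assuming each catalogued gesture stores an idealized template path and an associated command, and that divergence is measured as the distance between the partial user path and the matching prefix of each template. The catalogue entries, tolerance value, and function names are illustrative assumptions.

```python
import math
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float]

@dataclass
class CataloguedGesture:
    command: str
    template: List[Point]  # idealized gesture path, e.g. an "S" shape for "save all"

def diverged(partial: List[Point], template: List[Point], tolerance: float = 25.0) -> bool:
    """True if the partial user path strays from the matching prefix of the template."""
    prefix = template[:len(partial)]
    if len(prefix) < len(partial):
        return True  # user path is already longer than this template
    return any(
        math.hypot(px - tx, py - ty) > tolerance
        for (px, py), (tx, ty) in zip(partial, prefix)
    )

def predict(partial: List[Point], catalogue: List[CataloguedGesture]) -> List[CataloguedGesture]:
    """Flag catalogued gestures whose beginning is similar to the user gesture so far."""
    return [g for g in catalogue if not diverged(partial, g.template)]

# Illustrative catalogue: a rightward stroke for "next page", a downward stroke for "scroll".
catalogue = [
    CataloguedGesture("next page", [(0, 0), (30, 0), (60, 0), (90, 0)]),
    CataloguedGesture("scroll",    [(0, 0), (0, 30), (0, 60), (0, 90)]),
]
print([g.command for g in predict([(0, 0), (28, 3)], catalogue)])  # ['next page']
print([g.command for g in predict([(0, 0)], catalogue)])           # both remain possible
```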
As shown at 214, once the number of possible gestures has been sufficiently narrowed, the rendering engine may use the display to indicate the possible commands associated with the beginning of the analyzed gesture. The rendering engine may indicate the plurality of different possible commands at least in part by presenting, for each possible command, a hint for completing a user gesture associated with that possible command. The hint may include gesture path directions that show the user how to complete the gesture associated with a particular command. The hint may additionally or alternatively include a command shortcut that allows the user to perform a shortcut gesture in order to invoke the associated command.
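The hint content described above can be sketched as a simple structure the rendering engine might populate once the candidate set is small enough. The field names and the narrowing threshold below are assumptions for illustration, not part of the disclosure.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

Point = Tuple[float, float]

@dataclass
class CataloguedGesture:
    command: str
    template: List[Point]

@dataclass
class Hint:
    command: str                        # e.g. "save all"
    path_directions: List[Point]        # remaining trail the user could trace (e.g. a dashed line)
    shortcut_position: Optional[Point]  # where a virtual shortcut button could be drawn

def build_hints(candidates: List[CataloguedGesture], traced: int, max_hints: int = 4) -> List[Hint]:
    """Produce hints only once the possibilities are sufficiently narrowed."""
    if len(candidates) > max_hints:
        return []  # too many possibilities; keep the display uncluttered
    return [Hint(g.command, g.template[traced:], g.template[-1]) for g in candidates]

# Example: two remaining candidates, with two points of the gesture already traced.
candidates = [
    CataloguedGesture("save all", [(0, 0), (30, 10), (20, 40), (50, 50)]),
    CataloguedGesture("select",   [(0, 0), (30, 10), (60, 0), (30, -30)]),
]
for hint in build_hints(candidates, traced=2):
    print(hint.command, hint.path_directions, hint.shortcut_position)
```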
Gesture path directions may include a virtual trail that a user can trace in order to complete a gesture. The virtual trail may be displayed in a manner that indicates that it is a path that may be followed. In the illustrated embodiment, gesture path directions 220 and 222 are represented as dashed lines. In some embodiments, a label that names the associated command may be associated with the gesture path directions. The label may include letters, numbers, symbols, icons, or other indicia for identifying the gesture and/or the command associated with the gesture. In some embodiments, such a label may serve as a command shortcut. In the illustrated embodiment, command shortcuts 224 and command shortcut 226 are represented as words naming the commands associated with the respective gestures.
A command shortcut may include a virtual button that may be pressed to invoke the associated command. The virtual button may take the form of a label that names the command associated with the gesture. The command shortcut provides a user with an opportunity to perform a shortened version of the gesture in order to invoke the associated command. For example, a user may begin a gesture along its gesture path and then shortcut the gesture by moving directly to the virtual button. A command shortcut may be placed at virtually any location within the virtual workspace. As a nonlimiting example, the command shortcut may be placed near a user's finger, so as to provide the user with easy access to the command shortcut. As another example, the command shortcut may be placed on or near the gesture path directions, so as to reinforce teaching of the gesture.
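Placement of a command shortcut could follow a simple policy like the sketch below, assuming the current finger position and the remaining gesture path are known; the pixel offset and the midpoint choice are illustrative values, not specified by the disclosure.

```python
from typing import List, Tuple

Point = Tuple[float, float]

def place_shortcut(finger: Point, remaining_path: List[Point], near_finger: bool = True) -> Point:
    """Choose where to draw a command-shortcut button.

    near_finger=True places it just beside the user's finger for easy access;
    otherwise it is placed along the remaining gesture path directions to
    reinforce teaching of the gesture.
    """
    if near_finger or not remaining_path:
        offset = 40  # pixels; illustrative value
        return (finger[0] + offset, finger[1] + offset)
    return remaining_path[len(remaining_path) // 2]

# Example: place a shortcut next to a finger at (120, 200), then along the path instead.
print(place_shortcut((120, 200), [(150, 220), (180, 260), (210, 300)]))
print(place_shortcut((120, 200), [(150, 220), (180, 260), (210, 300)], near_finger=False))
```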
The example illustrated in
As illustrated in
At 304, the rendering engine indicates gestures that remain valid. At 306, the rendering engine removes gestures that are no longer valid. For example, as shown in
At 404, the rendering engine indicates gestures that remain valid. At 406, the rendering engine removes gestures that are no longer valid. For example, as shown in
In some situations, a user gesture may continue along indicated gesture path directions while at the same time aiming toward a command shortcut associated with a different gesture. In such cases, the rendering engine may continue to present both options until the user gesture diverges.
As discussed above, once predicted gestures and/or associated command shortcuts are displayed, a user may select and follow one of the displayed gesture paths or aim toward one of the displayed command shortcuts. Responsive to this continued user gesture, other displayed gesture paths and/or command shortcuts may be eliminated as viable choices, and the eliminated choices may be hidden. The remaining gesture paths and command shortcuts may be elaborated, progressively showing more options if available or necessary. This form of progressive disclosure enables the user interface to remain uncluttered while presenting useful information and choices to the user.
In some embodiments, a computing system may be configured to automatically invoke a command associated with the last possible gesture remaining after all other gestures are progressively eliminated. In other words, if a single gesture is the only remaining option, the associated command may be invoked without the user having to complete the gesture.
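This auto-invoke behavior can be sketched as a small check that fires when the candidate set collapses to a single gesture; the function name and command callbacks below are hypothetical.

```python
from typing import Callable, Dict, List

def maybe_auto_invoke(candidates: List[str], commands: Dict[str, Callable[[], None]]) -> bool:
    """Invoke the associated command automatically when exactly one gesture remains."""
    if len(candidates) == 1:
        commands[candidates[0]]()
        return True
    return False

# Example: once every other possibility has been eliminated, "save all" runs by itself.
commands = {"save all": lambda: print("saving all documents"),
            "select":   lambda: print("selecting objects")}
maybe_auto_invoke(["save all"], commands)             # invokes immediately
maybe_auto_invoke(["save all", "select"], commands)   # still ambiguous; nothing happens
```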
The predictive gesturing capability can optionally be a feature that a user can turn on or off. When on, predicted gestures may appear in several ways. For example, predictions may appear without delay as soon as the system has recognized and narrowed the possibilities to a reasonable number of choices for the user. Alternatively, the user may start a gesture, then pause long enough to signal to the system that help is needed, at which point the system may display the possible gestures.
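The two triggering behaviors described above, showing predictions immediately once the possibilities are narrowed or showing them after a pause, could be combined into a single policy check such as the sketch below. The candidate and pause thresholds are assumed values for illustration.

```python
def should_show_hints(num_candidates: int,
                      seconds_since_last_movement: float,
                      enabled: bool = True,
                      max_candidates: int = 5,
                      pause_threshold: float = 0.75) -> bool:
    """Decide whether the rendering engine should display predicted gestures.

    Hints appear either as soon as the possibilities are narrowed to a
    reasonable number, or after the user pauses long enough to signal that
    help is needed. The feature can also be turned off entirely.
    """
    if not enabled:
        return False
    if num_candidates <= max_candidates:
        return True
    return seconds_since_last_movement >= pause_threshold

# Examples
print(should_show_hints(3, 0.0))                  # True: narrowed enough, show immediately
print(should_show_hints(12, 0.2))                 # False: too many options, no pause yet
print(should_show_hints(12, 1.0))                 # True: user paused, offer help
print(should_show_hints(3, 1.0, enabled=False))   # False: feature turned off
```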
The gesture-predicting engine may be configured to determine if a user has previously demonstrated aptitude with a gesture. For example, if the same user has successfully executed an S-shaped, “save all” gesture a number of times, the gesture-predicting engine may remove that gesture from the list of possible gestures that the user may need assistance completing. As such, if the user begins a gesture that is consistent with the S-shaped, “save all” gesture after the gesture-predicting engine has recognized the user's proficiency with that gesture, the rendering engine may refrain from indicating hints associated with that gesture, thus focusing more attention on other hints.
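The proficiency check might be implemented with a simple per-user success counter, as in the hedged sketch below; the threshold of three successes and the record structure are illustrative assumptions, not specified by the disclosure.

```python
from collections import defaultdict
from typing import List

class ProficiencyTracker:
    """Tracks how often each user has successfully completed each gesture."""

    def __init__(self, proficiency_threshold: int = 3):
        self.threshold = proficiency_threshold
        self.successes = defaultdict(int)  # (user_id, gesture_name) -> success count

    def record_success(self, user_id: str, gesture: str) -> None:
        self.successes[(user_id, gesture)] += 1

    def is_proficient(self, user_id: str, gesture: str) -> bool:
        return self.successes[(user_id, gesture)] >= self.threshold

    def hints_to_show(self, user_id: str, candidate_gestures: List[str]) -> List[str]:
        """Suppress hints for gestures the user has already mastered."""
        return [g for g in candidate_gestures if not self.is_proficient(user_id, g)]

# Example: after three successful S-shaped "save all" gestures, its hint is suppressed.
tracker = ProficiencyTracker()
for _ in range(3):
    tracker.record_success("user-1", "save all")
print(tracker.hints_to_show("user-1", ["save all", "select", "move"]))  # ['select', 'move']
```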
As shown in
In some embodiments, hints that have been removed may once again be displayed if a user pauses a gesture. Different and/or additional gestures may be displayed if the user continues to pause. In some embodiments, a computing device may have a mechanism for a user to proactively request additional hints and/or change the hints that are displayed, so that a user can find a hint that is associated with a command the user wishes to invoke.
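Re-displaying removed hints on a pause, and revealing further hints the longer the pause continues, could be handled by something like the sketch below; the paging scheme and timing values are assumptions made only for illustration.

```python
from typing import List

def hints_for_pause(all_hints: List[str],
                    removed_hints: List[str],
                    pause_seconds: float,
                    page_size: int = 3) -> List[str]:
    """Bring back previously removed hints when the user pauses, and reveal
    additional hints the longer the pause continues."""
    if pause_seconds < 0.75:
        return []                          # no meaningful pause: nothing extra to show
    if pause_seconds < 2.0:
        return removed_hints[:page_size]   # short pause: restore the removed hints
    # Longer pause: page through the remaining catalogue of hints.
    page = int((pause_seconds - 2.0) // 1.5)
    start = page * page_size
    return all_hints[start:start + page_size]

# Example
all_hints = ["save all", "select", "move", "rotate", "delete", "duplicate"]
print(hints_for_pause(all_hints, ["select"], pause_seconds=1.0))  # ['select']
print(hints_for_pause(all_hints, ["select"], pause_seconds=3.6))  # ['rotate', 'delete', 'duplicate']
```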
Although the subject matter of the present disclosure has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.