Interactive display systems, such as surface computing devices, include a display screen and a touch sensing mechanism configured to detect touches on the display screen. Various types of touch sensing mechanisms may be used, including but not limited to optical, capacitive, and resistive mechanisms. An interactive display system may utilize a touch sensing mechanism as a primary user input device, thereby allowing the user to interact with the device without keyboards, mice, or other such traditional input devices.
Various embodiments are described herein that relate to determining an intent of a user to initiate an action on an interactive display system. For example, one disclosed embodiment provides a method of initiating an action on an interactive display device, the interactive display device comprising a touch-sensitive display. The method comprises displaying an initiation control at a launch region of the display, receiving an initiation input via the initiation control, displaying a confirmation target in a confirmation region of the display in response to receiving the initiation input, receiving a confirmation input via the confirmation target, and performing an action responsive to the confirmation input.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter. Furthermore, the claimed subject matter is not limited to implementations that solve any or all disadvantages noted in any part of this disclosure.
As mentioned above, an interactive display device may utilize a touch-sensitive display as a primary input device. Thus, touch inputs, which may include gesture inputs and hover inputs (i.e. gestures performed over the surface of the display), may be used to interact with all aspects of the device, including applications and the operating system.
In some environments, such as where an interactive display device has a table-like configuration with a horizontal display, inadvertent touches may occur. The severity of the impact of such a touch input may vary, depending upon how the interactive display device interprets the inadvertent input. For example, an inadvertent touch in a “paint” program may result in the drawing of an inadvertent line or other such minor, reversible action that is not disruptive to other users, while an inadvertent touch that results in closing or restarting an application or operating system shell may be very disruptive to the user experience.
Accordingly, various embodiments are disclosed herein that relate to staged initiation of actions on an interactive display device to help avoid inadvertent touches that result in the execution of disruptive actions. Prior to discussing these embodiments, an example interactive display device 100 is described with reference to FIG. 1.
Interactive display device 100 comprises a display 102 on which images are presented. Interactive display device 100 further includes a touch and/or hover detection system 104 configured to detect touch inputs and/or hover inputs on or near display 102. As mentioned above, the touch and/or hover detection system 104 may utilize any suitable mechanism to detect touch and/or hover inputs. For example, an optical touch detection system may utilize one or more cameras to detect touch inputs, e.g., via infrared light projected onto the display screen and/or via a frustrated total internal reflection (FTIR) mechanism. Likewise, an optical touch and/or hover detection system 104 may utilize a sensor-in-pixel display panel in which image sensor pixels are interlaced with image display pixels. Other non-limiting examples of touch and/or hover detection system 104 include capacitive and resistive touch detection systems.
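By way of illustration only, the following TypeScript sketch shows one way that events from such differing detection mechanisms might be normalized into a single event shape for consumption by higher software layers. All identifiers are hypothetical and are not drawn from this disclosure.

```typescript
// Illustrative abstraction: different detection mechanisms (optical FTIR,
// sensor-in-pixel, capacitive, resistive) are normalized to one event shape.
// All names here are hypothetical placeholders.

type ContactKind = "touch" | "hover";

interface ContactEvent {
  kind: ContactKind;   // on-surface touch vs. above-surface hover
  x: number;           // display coordinates, in pixels
  y: number;
  timestamp: number;   // milliseconds
}

interface ContactSource {
  // A detection subsystem pushes normalized events to a single handler,
  // so higher layers need not know which sensing mechanism produced them.
  onContact(handler: (e: ContactEvent) => void): void;
}

// Example consumer: route every contact, regardless of sensing mechanism.
function attachInput(source: ContactSource, route: (e: ContactEvent) => void): void {
  source.onContact(route);
}
```

Under this assumption, the input-handling logic discussed below need not distinguish between, for example, an FTIR-based and a capacitive sensing mechanism.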
Interactive display device 100 also includes a logic subsystem 106 and a data-holding subsystem 108. Logic subsystem 106 is configured to execute instructions stored in data-holding subsystem 108 to implement the various embodiments described herein. Logic subsystem 106 may include one or more physical devices configured to execute one or more instructions. For example, logic subsystem 106 may be configured to execute one or more instructions that are part of one or more applications, services, programs, routines, libraries, objects, components, data structures, or other logical constructs. Such instructions may be implemented to perform a task, implement a data type, transform the state of one or more devices, or otherwise arrive at a desired result.
Logic subsystem 106 may include one or more processors that are configured to execute software instructions. Additionally or alternatively, logic subsystem 106 may include one or more hardware or firmware logic machines configured to execute hardware or firmware instructions. Processors of logic subsystem 106 may be single core or multicore, and the programs executed thereon may be configured for parallel, distributed, or other suitable processing. Logic subsystem 106 may optionally include individual components that are distributed throughout two or more devices, which may be remotely located and/or configured for coordinated processing. One or more aspects of logic subsystem 106 may be virtualized and executed by remotely accessible networked computing devices configured in a cloud computing configuration.
Data-holding subsystem 108 may include one or more physical, non-transitory devices configured to hold data and/or instructions executable by logic subsystem 106 to implement the herein described methods and processes. When such methods and processes are implemented, the state of the data-holding subsystem 108 may be transformed (e.g., to hold different data).
Data-holding subsystem 108 may include removable computer media and/or built-in computer-readable storage media and/or other devices. Data-holding subsystem 108 may include optical memory devices (e.g., CD, DVD, HD-DVD, Blu-Ray Disc, etc.), semiconductor memory devices (e.g., RAM, EPROM, EEPROM, etc.) and/or magnetic memory devices (e.g., hard disk drive, floppy disk drive, tape drive, MRAM, etc.), among others. Data-holding subsystem 108 may include devices with one or more of the following characteristics: volatile, nonvolatile, dynamic, static, read/write, read-only, random access, sequential access, location addressable, file addressable, and content addressable. In some embodiments, logic subsystem 106 and data-holding subsystem 108 may be integrated into one or more common devices, such as an application specific integrated circuit or a system on a chip.
As mentioned above, an inadvertent touch input may be interpreted by an interactive display device as a command to perform an action. For example, in some embodiments, interactive display device 100 may take the form of a table or desk. As such, inadvertent touches may easily occur, for example, where a user rests a hand or elbow on the display. If such an inadvertent input occurs over a user interface control used for a disruptive action, such as a restart or exit action, the inadvertent touch may be disruptive to the user experience.
As a more specific example, in the embodiment of FIG. 1, display 102 includes an active region 110 associated with a user interface control for such a disruptive action, for example, a control for restarting or exiting an operating system shell.
Because the unintended execution of a restart command (for example) would disrupt the user experience, interactive display device 100 utilizes a staged activation sequence to confirm a user's intent to perform such an action. In this manner, a user making an unintentional touch may avoid triggering the action. While the embodiments described herein utilize a two-stage activation sequence, it will be understood that other embodiments may utilize three or more stages.
FIG. 2 shows a flow diagram depicting an embodiment of a method 200 of initiating an action on an interactive display device. Method 200 comprises, at 202, displaying an initiation control, such as an icon, in a launch region of the display and, at 204, receiving an initiation input in the launch region, wherein the initiation input comprises a touch interaction with the initiation control. It will be understood that the initiation control may be displayed persistently in the launch region, or may be displayed when a touch is detected in the launch region. The launch region comprises a portion of the display, such as active region 110 of FIG. 1.
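For illustration, steps 202 and 204 may be reduced to a hit test that decides whether a touch falls within the launch region. The following is a minimal TypeScript sketch under the assumption of a rectangular launch region; the names and coordinates are placeholders rather than elements of this disclosure.

```typescript
// Hypothetical sketch of steps 202/204: an initiation control occupies a
// launch region; a touch landing inside it counts as an initiation input.

interface Rect { x: number; y: number; width: number; height: number; }

const launchRegion: Rect = { x: 0, y: 0, width: 96, height: 96 }; // illustrative placement

function inRegion(px: number, py: number, r: Rect): boolean {
  return px >= r.x && px < r.x + r.width && py >= r.y && py < r.y + r.height;
}

// Called for each touch-down; returns true when the touch should be treated
// as an initiation input rather than an ordinary application touch.
function isInitiationInput(px: number, py: number): boolean {
  return inRegion(px, py, launchRegion);
}
```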
An initiation input made over the initiation control may be intended or inadvertent. Thus, the interactive display device does not perform the action until a confirmation input is received. Accordingly, method 200 next comprises, at 206, displaying a confirmation target, such as a target icon and/or target text, in a confirmation region of the display. The display of the confirmation target may signal to a user that the initiation touch has been recognized, and the target text may indicate the action that will be performed if a confirmation input is received. The term “confirmation target” as used herein signifies any user interface element with which a user interacts to confirm intent to perform a previously initiated action.
In the depicted embodiment, the target text 308 indicates the action to be performed if confirmed. As shown in the embodiment illustrated in FIG. 3, the confirmation target 307 may further comprise a target icon 310 displayed along with the target text 308, into which a user may drag an initiation control 306 to confirm the action.
Returning to FIG. 2, method 200 next comprises, at 208, receiving a confirmation input via the confirmation target, wherein the confirmation input comprises a touch interaction with the confirmation target, such as a gesture dragging the initiation control into the confirmation target. In some embodiments, the confirmation target is displayed for a confirmation time interval, such that the confirmation target is removed from the display, and the initiated action canceled, if the confirmation input is not received within that interval.
The confirmation time interval may have any suitable duration. Suitable durations include, but are not limited to, durations long enough to allow a new user to understand the nature of the confirmation input, yet not so long as to occupy display space for an undesirable period.
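Taken together, steps 206 and 208 and the confirmation time interval suggest a small state machine: an initiation input arms the device and starts a timer, a confirmation input received while armed triggers the action, and expiry of the timer cancels. The TypeScript sketch below is one illustrative way to structure this; the callbacks, interval, and class name are assumptions rather than elements of this disclosure.

```typescript
// Minimal sketch of the staged sequence: after an initiation input, a
// confirmation target is shown for a confirmation time interval; a
// confirmation input within that interval performs the action, and the
// target is dismissed if the interval lapses first. Names are illustrative.

type Stage = "idle" | "awaitingConfirmation";

class StagedAction {
  private stage: Stage = "idle";
  private timer: ReturnType<typeof setTimeout> | null = null;

  constructor(
    private confirmationIntervalMs: number,  // e.g. a few seconds (assumed)
    private showTarget: () => void,          // draw confirmation target + text
    private hideTarget: () => void,
    private performAction: () => void,       // e.g. restart or exit
  ) {}

  onInitiationInput(): void {
    if (this.stage !== "idle") return;
    this.stage = "awaitingConfirmation";
    this.showTarget();
    this.timer = setTimeout(() => this.cancel(), this.confirmationIntervalMs);
  }

  // Called when the dragged control is released over the confirmation target.
  onConfirmationInput(): void {
    if (this.stage !== "awaitingConfirmation") return;
    this.reset();
    this.performAction();
  }

  private cancel(): void {
    if (this.stage === "awaitingConfirmation") this.reset();
  }

  private reset(): void {
    if (this.timer) clearTimeout(this.timer);
    this.timer = null;
    this.stage = "idle";
    this.hideTarget();
  }
}
```

In a usage of this sketch, `onInitiationInput` would be wired to a touch-down in the launch region and `onConfirmationInput` to release of the dragged control over the confirmation target.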
Returning to FIG. 2, in some embodiments one or more training elements may be displayed to guide a less experienced user in performing the confirmation input, for example, an animation illustrating the confirmation gesture.
Such training elements may be displayed based on various gesture input characteristics, including, but not limited to, gesture speed and/or direction characteristics. For example, a training element may be displayed for a gesture judged to be slower than a predetermined threshold speed, or to follow an incorrect path, as a less experienced user who is unsure how the icon should be manipulated may perform a comparatively slower gesture than more experienced and confident users.
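One illustrative way to implement such a speed-based trigger is sketched below in TypeScript; the threshold value is an assumed placeholder, not a value taken from this disclosure.

```typescript
// Illustrative test for when a training element might be shown: if the
// average speed of the confirmation gesture falls below a threshold, the
// user may be unfamiliar with the gesture.

interface Sample { x: number; y: number; t: number; } // position (px), time (ms)

const SLOW_GESTURE_PX_PER_MS = 0.15; // assumed threshold, for illustration only

function shouldShowTrainingElement(samples: Sample[]): boolean {
  if (samples.length < 2) return false;
  let distance = 0;
  for (let i = 1; i < samples.length; i++) {
    distance += Math.hypot(samples[i].x - samples[i - 1].x,
                           samples[i].y - samples[i - 1].y);
  }
  const elapsed = samples[samples.length - 1].t - samples[0].t;
  return elapsed > 0 && distance / elapsed < SLOW_GESTURE_PX_PER_MS;
}
```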
In some embodiments, the display of confirmation target 307 and/or initiation control 306 may itself provide the function offered by one or more training elements. For example, the appearance of confirmation target 307 and/or initiation control 306 may be varied as the user performs the confirmation gesture, such variation being configured to indicate the user's progress toward successful performance of the gesture. It will be understood that suitable haptic cues, audible cues, and/or visual animation cues may accompany the display of a training element.
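As one hypothetical form of such progress indication, the fraction of the drag distance covered could be mapped to an appearance attribute of the target, for example an opacity value. The TypeScript sketch below uses hypothetical names and a simple linear mapping.

```typescript
// Sketch of progress feedback: as the control is dragged toward the target,
// its appearance (here, an opacity value) reflects progress toward completing
// the gesture. The linear mapping is an illustrative assumption.

interface Point { x: number; y: number; }

function progressToward(start: Point, target: Point, current: Point): number {
  const total = Math.hypot(target.x - start.x, target.y - start.y);
  if (total === 0) return 1;
  const covered = total - Math.hypot(target.x - current.x, target.y - current.y);
  return Math.min(1, Math.max(0, covered / total));
}

// e.g. fade the confirmation target in as the user approaches it:
function targetOpacity(start: Point, target: Point, current: Point): number {
  return 0.3 + 0.7 * progressToward(start, target, current);
}
```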
As mentioned above, touch inputs other than a dragging gesture may be utilized as confirmation inputs. For example, receiving a confirmation input may comprise receiving a tap input in the confirmation region. As a more specific example, an experienced user may elect to first tap initiation control 306 and then tap target text 308 or target icon 310 to confirm the action the user intends the device to perform, rather than performing the dragging confirmation input. This combination may be faster for the user than a tap-and-drag sequence and thus may appeal to more skilled users. In response, in some embodiments, the display may show movement of initiation control 306 into target icon 310 to provide a visual cue that the confirmation input was performed successfully. In some embodiments, other suitable haptic cues, audible cues, and/or visual animation cues may be provided to indicate successful performance of the confirmation input, while in other embodiments, no cues may be provided other than those accompanying performance of the initiated action (for example, a shutdown animation sequence accompanying shutdown of the device).
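The tap-then-tap alternative might be handled with a simple two-stage arming scheme, sketched below in TypeScript with hypothetical identifiers.

```typescript
// Hypothetical handling of the tap-tap alternative: a tap on the initiation
// control followed by a tap on the target text or target icon confirms the
// action, without a drag.

type TapTarget = "initiationControl" | "targetText" | "targetIcon" | "other";

function makeTapConfirmer(confirm: () => void): (tapped: TapTarget) => void {
  let armed = false; // set once the initiation control has been tapped
  return (tapped: TapTarget) => {
    if (tapped === "initiationControl") {
      armed = true;                       // first stage: initiation
    } else if (armed && (tapped === "targetText" || tapped === "targetIcon")) {
      armed = false;
      confirm();                          // second stage: confirmation
    } else {
      armed = false;                      // any other touch disarms (assumed)
    }
  };
}
```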
Once the interactive display device receives the confirmation input, method 200 comprises, at 210, performing the action. For example, the device may perform the restart, exit, or shutdown action indicated by the target text.
It is to be understood that the configurations and/or approaches described herein are exemplary in nature, and that these specific embodiments or examples are not to be considered in a limiting sense, because numerous variations are possible. The specific routines or methods described herein may represent one or more of any number of processing strategies. As such, various acts illustrated may be performed in the sequence illustrated, in other sequences, in parallel, or in some cases omitted. Likewise, the order of the above-described processes may be changed.
The subject matter of the present disclosure includes all novel and nonobvious combinations and subcombinations of the various processes, systems and configurations, and other features, functions, acts, and/or properties disclosed herein, as well as any and all equivalents thereof.
This application claims priority to U.S. Provisional Patent Application Ser. No. 61/429,715, titled “Two-stage Access Points,” and filed on Jan. 4, 2011, the entirety of which is hereby incorporated herein by reference for all purposes.