Touch-sensitive display screens have become increasingly common as an alternative to traditional keyboards and other human-machine interfaces (“HMI”) to receive data entry or other input from a user. Touch screens are used in a variety of devices including both portable and fixed location devices. Portable devices with touch screens commonly include, for example, mobile phones, personal digital assistants (“PDAs”), and personal media players that play music and video. Devices fixed in location that use touch screens commonly include, for example, those used in vehicles, point-of-sale (“POS”) terminals, and equipment used in medical and industrial applications.
The ability to directly touch and manipulate data on a touch screen has a strong appeal to users. In many respects, a touch screen can be a more advantageous input mechanism than the traditional mouse. When using a touch screen, a user can simply tap the screen directly on the graphical user interface element (e.g., an icon) they wish to select rather than having to position a cursor over the user interface element with a mouse.
Touch screens can serve both to display output from the computing device to the user and to receive input from the user. The user's input options may be displayed, for example, as control, navigation, or object icons on the screen. When the user selects an input option by touching the associated icon on the screen with a stylus or finger, the computing device senses the location of the touch and sends a message to the application or utility that presented the icon.
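By way of illustration only, the following Python sketch models this hit-testing and dispatch step; the `Icon` class, its fields, and `dispatch_touch` are hypothetical names chosen for the example and are not part of any particular implementation described herein.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional


@dataclass
class Icon:
    """A rectangular on-screen control presented by an application."""
    label: str
    x: int
    y: int
    width: int
    height: int
    on_touch: Callable[[], None]  # stands in for the message sent to the presenting application

    def contains(self, px: int, py: int) -> bool:
        """True if the sensed touch location falls within this icon's bounds."""
        return (self.x <= px < self.x + self.width
                and self.y <= py < self.y + self.height)


def dispatch_touch(icons: List[Icon], px: int, py: int) -> Optional[Icon]:
    """Locate the icon under the sensed touch and notify the application that presented it."""
    for icon in icons:
        if icon.contains(px, py):
            icon.on_touch()
            return icon
    return None
```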
Conventional touch screen input devices can be problematic for visually impaired users because they are not able to visually judge the alignment of their finger or stylus with the desired graphical user interface element appearing on the screen prior to contacting it. In addition, they do not have a means to verify the impact of touching the screen prior to making contact with it, by which time the underlying application will have already acted in response to that contact.
To overcome this limitation, in one implementation a touch screen input device is provided which simulates a 3-state input device such as a mouse. One of these states is used to preview the effect of activating a graphical user interface element when the screen is touched. In this preview state, touching a graphical user interface element on the screen with a finger or stylus does not cause the action associated with that element to be performed. Rather, when the screen is touched while in the preview state, audio cues are provided to the user indicating what would happen if the action associated with the touched element were to be performed.
In some implementations, once the user has located a graphical user interface element that he or she desires to select, the user can place a second finger or stylus on the touch screen while the first finger or stylus maintains contact with the element. In this way the desired graphical user interface element can be activated. That is, placing the touch screen in the second state by making contact with a second finger or stylus causes the underlying application to respond as it would when that element is selected using a conventional input device.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
The touch sensor component sits on top of the display component. The touch sensor is transparent so that the display may be seen through it. Many different types of touch sensor technologies are known and may be applied as appropriate to meet the needs of a particular implementation. These include resistive, capacitive, near field, optical imaging, strain gauge, dispersive signal, acoustic pulse recognition, infrared, and surface acoustic wave technologies, among others. Some current touch screens can discriminate among multiple, simultaneous touch points and/or are pressure-sensitive. Interaction with the touch screen 110 is typically accomplished using fingers or thumbs; for non-capacitive touch sensors, a stylus may also be used.
Other illustrative form factors in which the computing device may be employed are shown in
While many of the form factors shown in
In order to facilitate an understanding of the methods, techniques and systems described herein, it may be helpful to compare the operation of a conventional mouse with a conventional touch screen input device using state diagrams to model their functionality.
First, when a mouse is out of its tracking range (such as occurs when a mechanical mouse is lifted off a surface), the mouse is in a state 0 that may be referred to as out-of-range. Next, consider a mouse that is within its tracking range but without any of its buttons being depressed. This state may be referred to as tracking, which describes a state in which a cursor or pointer appearing on the screen follows the motion of the mouse. The tracking state may be referred to as state 1. In the tracking state the cursor or pointer can be positioned over any desired graphical user interface element by moving the mouse. The mouse can also operate in a second state (referred to as state 2) when a button is depressed. In this state, which can be referred to as dragging, graphical user interface elements or objects are moved (“dragged”) on the display so that they follow the motion of the mouse. It should be noted that the act of selecting an icon may be considered a sub-state of the dragging state since selecting involves depressing and releasing a button.
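By way of illustration only, the three mouse states described above can be summarized in a short Python sketch; the enumeration and function names are assumptions made for the example rather than part of any claimed implementation.

```python
from enum import Enum, auto


class MouseState(Enum):
    OUT_OF_RANGE = auto()  # state 0: the mouse is outside its tracking range
    TRACKING = auto()      # state 1: in range, no button depressed; the cursor follows the mouse
    DRAGGING = auto()      # state 2: in range with a button depressed; objects follow the mouse


def mouse_state(in_range: bool, button_depressed: bool) -> MouseState:
    """Map the raw mouse condition onto the three states of the model."""
    if not in_range:
        return MouseState.OUT_OF_RANGE
    return MouseState.DRAGGING if button_depressed else MouseState.TRACKING
```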
The lack of a tracking state in a conventional touch screen input device can be overcome by sighted users because they are able to visually judge the alignment of their finger or stylus with the desired graphical user interface element appearing on the screen prior to contacting it. Visually impaired users, however, do not have a means to verify the impact of touching the screen prior to making contact with it, by which time the underlying application will have already acted in response to that contact.
To overcome this limitation, a touch screen input device is provided which simulates a 3-state input device such as a mouse. The additional state is used to preview the effect of entering state 2 when the screen is touched. In this preview state, touching a graphical user interface element on the screen does not cause the action associated with that element to be performed. Rather, when the screen is touched while in the preview state, audio cues are provided to the user indicating what action would arise if the touch screen input device were to be in state 2.
Once the user has located a graphical user interface element that he or she desires to select, the user can enter state 2 by placing a second finger or stylus on the touch screen while the first finger or stylus maintains contact with the element. In this way the desired graphical user interface element can be activated. That is, placing the touch screen in the second state by making contact with a second finger or stylus causes the underlying application to respond as it would when that element is selected using a conventional input device.
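By way of illustration only, the following Python sketch maps the number of simultaneous contacts onto the three states of the simulated input device and shows how a touched element might be previewed or activated; the `ui_element_at`, `speak`, and `activate` callables, and the `label` and `description` attributes, are hypothetical names used for the example.

```python
from enum import Enum, auto


class TouchState(Enum):
    OUT_OF_RANGE = auto()   # no contact with the screen
    AUDIO_PREVIEW = auto()  # one contact: announce the touched element without acting on it
    TOUCH = auto()          # a second contact: activate the element as a conventional selection


def touch_state(contact_count: int) -> TouchState:
    """Derive the simulated 3-state model from the number of simultaneous contacts."""
    if contact_count == 0:
        return TouchState.OUT_OF_RANGE
    return TouchState.AUDIO_PREVIEW if contact_count == 1 else TouchState.TOUCH


def handle_contacts(contacts, ui_element_at, speak, activate):
    """contacts: list of (x, y) points; the first entry is the first finger or stylus."""
    state = touch_state(len(contacts))
    if state is not TouchState.OUT_OF_RANGE:
        element = ui_element_at(*contacts[0])
        if element is not None:
            if state is TouchState.AUDIO_PREVIEW:
                speak(f"{element.label}: {element.description}")  # audio cue only; no action taken
            else:
                activate(element)  # respond as if selected with a conventional input device
    return state
```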
As indicated in
In some implementations the touch state can be entered from the audio preview state by placing the second finger or stylus anywhere on the screen or, alternatively, on a predefined portion of the screen. In other implementations the second finger or stylus makes contact with the screen in close proximity to the first finger or stylus. For instance, in some cases the second finger or stylus makes contact within a predefined distance from the first finger or stylus. One such example is shown in
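By way of illustration only, a proximity test of this kind might be expressed as follows in Python; the particular distance threshold is an assumption chosen for the example and would be set as appropriate for a given implementation.

```python
import math


def second_contact_accepted(first_xy, second_xy, max_distance_px: float = 100.0) -> bool:
    """True if the second contact falls within the predefined distance of the first.

    first_xy, second_xy: (x, y) coordinates of the first and second contacts.
    max_distance_px: illustrative threshold only; an actual implementation would tune this value.
    """
    (x1, y1), (x2, y2) = first_xy, second_xy
    return math.hypot(x2 - x1, y2 - y1) <= max_distance_px
```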
A host application 407 is typically utilized to provide a particular desired functionality. However, in some cases, the features and functions implemented by the host applications 407 can alternatively be provided by the device's operating system or middleware. For example, file system operations and input through a touch screen may be supported as basic operating system functions in some implementations.
An audio preview component 420 is configured to expose a variety of input events to the host application 407 and functions as an intermediary between the host application and the hardware-specific input controllers. These controllers include a touch screen controller 425, an audio controller 430 and possibly other input controllers 428 (e.g., a keyboard controller), which may typically be implemented as device drivers in software. Touch screen controller 425 interacts with the touch screen, which is abstracted in a single hardware layer 440 in
Thus, the audio preview component 420 is arranged to receive input events such as physical coordinates from the touch screen controller 425. The nature of the input events determines the state of the touch screen. That is, the manner in which the user contacts the screen with one or two fingers or styluses determines if the screen is in the out-of-range, audio preview, or touch state. In the preview state, the audio preview component 420 then formulates the appropriate calls to the host application in order to obtain information concerning the functionality performed by the graphical user interface element that is being touched or contacted. For instance, if the host application 407 allows programmatic access, the audio preview component 420 can extract data in the host application 407 that identifies the graphical user interface element that the user has selected in either the audio preview state or the touch state. If the audio preview component 420 cannot programmatically access the contents of the host application 407, the host program may need to be written to incorporate appropriate APIs that can expose the necessary information to the audio preview component 420. The extracted data, typically in the form of text, can undergo text-to-speech conversion using a text-to-speech converter or module accessed by the audio preview component 420. Alternatively, the extracted data may be used to generate audio data that is indicative of the function performed by activation of the graphical user interface element that is being touched or contacted. For instance, in some cases a distinct tone may be used to represent commonly used graphical user interface elements such as “save,” “close,” and the like. The audio preview component 420 can then expose the audio data to audio controller 430, which can send a drive signal to an audio generator in hardware layer 440 so that the audio can be rendered.
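By way of illustration only, the event flow described above can be sketched in Python as follows; the class, its collaborators (`host_app`, `text_to_speech`, `audio_controller`), and the tone mapping are hypothetical stand-ins for the audio preview component 420, host application 407, and audio controller 430, not an actual API.

```python
COMMON_ELEMENT_TONES_HZ = {"save": 440.0, "close": 660.0}  # illustrative tone mapping only


class AudioPreviewComponent:
    """Intermediary between the host application and the hardware-specific input controllers."""

    def __init__(self, host_app, text_to_speech, audio_controller):
        self.host_app = host_app                  # exposes the element under a given coordinate
        self.text_to_speech = text_to_speech      # converts extracted text to audio data
        self.audio_controller = audio_controller  # drives the audio generator in the hardware layer

    def on_input_event(self, contacts):
        """contacts: physical coordinates reported by the touch screen controller."""
        if not contacts:
            return "out-of-range"
        element = self.host_app.element_at(*contacts[0])
        if len(contacts) == 1:                    # audio preview state
            if element is not None:
                self._render_preview(element)
            return "audio-preview"
        if element is not None:                   # touch state
            self.host_app.activate(element)       # the application responds as to a normal selection
        return "touch"

    def _render_preview(self, element):
        """Produce an audio cue describing the element without performing its action."""
        tone_hz = COMMON_ELEMENT_TONES_HZ.get(element.name)
        if tone_hz is not None:
            self.audio_controller.play_tone(tone_hz)          # distinct tone for common elements
        else:
            audio = self.text_to_speech(element.description)  # text extracted from the host application
            self.audio_controller.play(audio)
```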
As used in this application, the terms “component” and “system” and the like are intended to refer to a computer-related entity, either hardware, a combination of hardware and software, software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, a processor, an object, an instance, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a computer and the computer can be a component. One or more components may reside within a process and/or thread of execution and a component may be localized on one computer and/or distributed between two or more computers.
Furthermore, the claimed subject matter may be implemented as a method, apparatus, or article of manufacture using standard programming and/or engineering techniques to produce software, firmware, hardware, or any combination thereof to control a computer to implement the disclosed subject matter. The term “article of manufacture” as used herein is intended to encompass a machine-readable computer program accessible from any computer-readable device or storage media. For example, computer readable storage media can include but are not limited to magnetic storage devices (e.g., hard disk, floppy disk, magnetic strips . . . ), optical disks (e.g., compact disk (CD), digital versatile disk (DVD) . . . ), smart cards, and flash memory devices (e.g., card, stick, key drive . . . ). Of course, those skilled in the art will recognize that many modifications may be made to this configuration without departing from the scope or spirit of the claimed subject matter.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.