Advances in software and hardware have resulted in new user interface problems. For example, some combinations of hardware and software enable a touch-screen computing device such as a mobile phone to simultaneously display output to the device's touch screen and to an external display. In such a case, where the computing device displays interactive graphics on two displays, it would be convenient to enable a user to use the touch screen to direct input to a user interface displayed on the touch screen as well as to a user interface displayed on the external display. In this scenario, however, there has been no efficient and intuitive way for a user to control which user interface any particular touch input is directed to (“user interface” broadly refers to units such as displays, application windows, controls/widgets, virtual desktops, and the like).
In general, it can be difficult to perform some types of interactions with touch input surfaces. For example, most windowing systems handle touch inputs in such a way that they are likely to interact directly with any co-located user interface; providing input without interacting with an underlying user interface is often not possible. Moreover, when multiple user interfaces can potentially be targeted by a touch input, it has not been possible for a user to use the manner in which the touch input is formed as a way to control which user interface will receive it. Instead, dedicated mechanisms have been needed. For example, a special user interface element such as a virtual mouse or targeting cursor might be manipulated to designate the user interface currently targeted by touch inputs.
In addition, it is sometimes desirable to differentiate between different sets of touch gestures. A gesture in one set might be handled by one user interface and a gesture in another set might be handled by another user interface. For example, one set of gestures might be reserved for invoking global or system commands and another set of gestures might be recognized for applications. Previously, sets of gestures have usually been differentiated based on geometric attributes of the gestures or by using reserved display areas. Both approaches have shortcomings. Using geometric features may require a user to remember many forms of gestures and an application developer may need to take into account the unavailability of certain gestures or gesture features. In addition, it may be difficult to add a new global gesture since existing applications and other software might already be using the potential new gesture. Reserved display areas can limit how user experiences are managed, and they can be unintuitive, challenging to manage, and difficult for a user to discern.
The inventors have appreciated that sensing surfaces that measure and output the pressure of touch points can be leveraged to address some of the problems mentioned above. User interaction models that use pressure-informed touch input points (“pressure points”) are described herein.
The following summary is included only to introduce some concepts discussed in the Detailed Description below. This summary is not comprehensive and is not intended to delineate the scope of the claimed subject matter, which is set forth by the claims presented at the end.
Embodiments relate to using the pressure of user inputs to select user interfaces and user interaction models. A computing device handling touch inputs that include respective pressure measures evaluates the pressure measures to determine how the touch inputs are to be handled. In this way, a user can use pressure to control how touch inputs are handled. In scenarios where multiple user interfaces or displays managed by the same operating system can both be targeted by touch input from the same input device, user-controlled pressure can determine which display or user interface the touch inputs will be associated with. Touch inputs can be directed, based on pressure, by modifying their event types or by passing them to particular responder chains or points on responder chains, for example.
Many of the attendant features will be explained below with reference to the following detailed description considered in connection with the accompanying drawings.
The present description will be better understood from the following detailed description read in light of the accompanying drawings, wherein like reference numerals are used to designate like parts in the accompanying description.
The breakdown of functionality of modules shown in
The sensing surface 122 outputs raw pressure points 124, each of which has device coordinates and a measure of pressure, for instance between zero and one. The hardware stack 108 receives the raw pressure points 124, which are passed on by a device driver 126. At some point between the hardware stack 108 and the windowing system 112, the raw pressure points are converted to display coordinates and outputted by the windowing system 112 as input events 128 to be passed down through a chain of responders or handlers, perhaps starting within the windowing system 112 and ending at one or more applications.
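To make the data flow concrete, the following is a minimal sketch of how a raw pressure point might be converted into an input event. It is illustrative only; the type names (RawPressurePoint, InputEvent), the linear coordinate mapping, and the default event type are assumptions rather than details of any particular hardware stack or windowing system.

```python
from dataclasses import dataclass

@dataclass
class RawPressurePoint:
    x_dev: float      # device coordinates reported by the sensing surface
    y_dev: float
    pressure: float   # normalized pressure measure, e.g., 0.0 (none) to 1.0 (full)

@dataclass
class InputEvent:
    x_disp: int       # display coordinates used by the windowing system
    y_disp: int
    pressure: float
    event_type: str = "pointer-move"

def to_input_event(pt: RawPressurePoint, scale_x: float, scale_y: float) -> InputEvent:
    """Convert device coordinates to display coordinates; the pressure measure passes through."""
    return InputEvent(int(pt.x_dev * scale_x), int(pt.y_dev * scale_y), pt.pressure)
```

The resulting events would then be passed down a responder chain as described above.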
For discussion, pressure levels will be assumed to range linearly from 0 to 1, where 0 indicates no pressure, 1 indicates full pressure, 0.5 represents half pressure, and so forth. Also for discussion, simple pressure conditions will be assumed: the first and fourth pressure conditions 204, 210 are “is P below 0.5”, and the second and third pressure conditions 206, 208 are “is P above 0.5”. However, more complex conditions can also be used, as described further below.
As noted above, the state machine controls which of the potential user interfaces input events are to be associated with. When a new input event 128 is received, the state machine determines whether its state should change based on the current state of the state machine and the pressure of the input event. If a new input event is received while the state machine is in the upper layer state, then the pressure of the input event is evaluated against the first and second pressure conditions 204, 206 (where the two conditions are logical complements, only one need be evaluated). If a new input event is received while the state machine is in the lower layer state, then the pressure of the input event is evaluated against the third and fourth pressure conditions 208, 210.
If the state machine is in the upper layer state and the input event has a pressure of 0.3, then the state machine stays in the upper layer state. If the state machine is in the upper layer state and the input event has a pressure of 0.6, then the state machine transitions to the lower layer state. The input event is designated to whichever user interface is represented by the state that is selected by the input event. Similarly, if the state machine is in the lower layer state when the input is received then the pressure is evaluated against the third and fourth conditions. If the input pressure is 0.2 then the fourth pressure condition is satisfied and the state transitions from the lower layer state to the upper layer state and the input event is designated to the first user interface. If the input pressure is 0.8 then the third condition is met and the state remains at the lower layer state and the input event is designated to the second user interface.
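The state machine just described can be sketched in a few lines of code. This is a simplified illustration, not the selection logic 150 itself; the class and method names and the symmetric 0.5 thresholds are assumptions chosen to match the example pressures above.

```python
class LayerSelector:
    """Two-state machine: 'upper' selects the first user interface, 'lower' the second."""
    def __init__(self, enter_lower: float = 0.5, stay_lower: float = 0.5):
        self.enter_lower = enter_lower   # second pressure condition: enter the lower layer
        self.stay_lower = stay_lower     # third pressure condition: remain in the lower layer
        self.state = "upper"

    def route(self, pressure: float) -> str:
        if self.state == "upper":
            if pressure > self.enter_lower:     # second condition satisfied
                self.state = "lower"
            # otherwise the first condition holds and the state is unchanged
        else:
            if pressure < self.stay_lower:      # fourth condition satisfied
                self.state = "upper"
            # otherwise the third condition holds and the state is unchanged
        return "first UI" if self.state == "upper" else "second UI"

selector = LayerSelector()
assert selector.route(0.3) == "first UI"    # stays in the upper layer state
assert selector.route(0.6) == "second UI"   # transitions to the lower layer state
assert selector.route(0.8) == "second UI"   # remains in the lower layer state
assert selector.route(0.2) == "first UI"    # transitions back to the upper layer state
```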
The thresholds or other conditions can be configured to help compensate for imprecise human pressure perception. For example, if the second condition has a threshold (e.g., 0.9) higher than the third condition's (e.g., 0.3), then the effect is that once the user has provided sufficient pressure to move the state to the lower layer, less pressure (or none, in the case of a zero threshold) is needed for the user's input to stay associated with the lower layer. This approach of using different thresholds to respectively enter and exit a state can be used for either state. Thresholds of less than zero or greater than one can be used to create a “sticky” state that is exited only by a timeout or similar external signal. The state machine's state transitions can consider such other factors, for example timeouts or external signals, in addition to the pressure thresholds.
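Using the LayerSelector sketched above, the hysteresis described here amounts to configuring a higher threshold for entering the lower layer than for staying in it. The specific values simply mirror the 0.9 and 0.3 examples and are not prescriptive.

```python
selector = LayerSelector(enter_lower=0.9, stay_lower=0.3)
assert selector.route(0.95) == "second UI"  # a hard press enters the lower layer
assert selector.route(0.4) == "second UI"   # lighter pressure still stays associated with it
assert selector.route(0.2) == "first UI"    # dropping below 0.3 returns to the upper layer
# A stay_lower value below zero would make the lower layer "sticky": no pressure
# value could satisfy the exit condition, so only a timeout or similar external
# signal (handled outside this sketch) would leave the state.
```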
At step 232, while the selection logic 150 is in a default state (e.g., a state for the first user interface unit 152), the user touches the sensing surface 122, which generates a pressure point that is handled by the selection logic 150. The pressure of the pressure point is evaluated and found to satisfy the first pressure condition 204, so the state machine remains in the upper layer state 200 (no state change); i.e., the pressure point is associated with the first user interface unit 152. The user's finger traces the touch stroke 230 while continuing to satisfy the first pressure condition 204. As a result, the selection logic 150 directs the corresponding touch events (pressure points) to the first user interface unit 152. In section B, while the input pressure (e.g., 0.3) continues to satisfy the first pressure condition 204, corresponding first pressure points 230A are directed to the first user interface unit 152.
At step 234, the pressure is increased and, while the state machine is in the upper layer state 200, a corresponding pressure point is evaluated at step 234A and found to satisfy the second pressure condition 206. Consequently, the selection logic 150 transitions its state to the lower layer state 202, which selects the second user interface unit 154 and causes subsequent second pressure points 230B to be directed to the second user interface unit 154. Depending on the particulars of the pressure conditions, it is possible that, once in the lower layer state 202, the pressure can drop below the level that was required to enter the state and yet the state remains the lower layer state 202.
At step 236 the user has decreased the pressure of the touch stroke 230 to the point where a pressure point is determined, at step 236A, to satisfy the fourth pressure condition 210. This causes the selection logic 150 to transition to the upper layer state 200, which selects the first user interface unit 152 as the current target user interface. Third pressure points 230C of the touch stroke are then directed to the first user interface unit 152 for possible handling thereby.
The selection logic 150 may perform other user interface related actions in conjunction with state changes. For example, at step 236, the selection logic 150 may invoke feedback to signal to the user that a state change has occurred. Feedback might be haptic, visual (e.g., a screen flash), and/or audio (e.g., a “click” sound). In addition, the selection logic 150 might modify or augment the stream of input events being generated by the touch stroke 230. For example, at step 236 the selection logic 150 might cause the input events to include known types of input events such as a “mouse button down” event, a “double tap” event, a “dwell event”, a “pointer up/down” event, a “click” event, a “long click” event, a “focus changed” event, a variety of action events, etc. For example, if haptic feedback and a “click” event 238 are generated at step 236 then this can simulate the appearance and effect of clicking a mechanical touch pad (as commonly found on laptop computers), a mouse button, or other input devices.
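A sketch of how the event stream might be augmented on a state change follows. The Feedback class, the event names, and the choice to trigger on the transition back to the upper layer (mirroring step 236) are placeholders; real feedback channels and event types are platform-specific.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Feedback:
    log: List[str] = field(default_factory=list)

    def haptic_pulse(self) -> None:
        self.log.append("haptic")

    def play_sound(self, name: str) -> None:
        self.log.append("sound:" + name)

def on_state_change(old_state: str, new_state: str,
                    events: List[str], feedback: Feedback) -> None:
    """Augment the input stream and signal the user when the target UI changes."""
    if old_state == "lower" and new_state == "upper":   # e.g., the transition at step 236
        events.append("click")        # synthesized event, simulating e.g. a touch-pad click
        feedback.haptic_pulse()       # haptic cue that the transition occurred
        feedback.play_sound("click")  # audible cue
```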
Another state-driven function of the selection logic 150 may be ignoring or deleting pressure points under certain conditions. For example, in one embodiment, the selection logic 150 might have a terminal state where a transition from the lower layer state 202 to the terminal state causes the selection logic 150 to take additional steps such as ignoring additional touch inputs for a period of time, etc.
In another embodiment, the lower layer state 202 might itself be a terminal state with no pressure-based exit conditions. For example, when the lower layer state 202 is entered, the selection logic 150 may remain in the lower layer state 202 until a threshold inactivity period expires. A bounding box might be established around a point of the touch stroke 230 associated with a state transition, and input in that bounding box might be automatically directed to a corresponding user interface until a period of inactivity within the bounding box occurs.
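One possible realization of the bounding-box idea is sketched below. The box size, the timeout value, and the class name are assumptions made only to illustrate the behavior.

```python
import time

class StickyRegion:
    """Keep routing input near the transition point to the second UI until the region goes idle."""
    def __init__(self, cx: float, cy: float, half_size: float = 50.0, idle_timeout: float = 2.0):
        self.box = (cx - half_size, cy - half_size, cx + half_size, cy + half_size)
        self.idle_timeout = idle_timeout
        self.last_activity = time.monotonic()

    def routes_to_second_ui(self, x: float, y: float) -> bool:
        if time.monotonic() - self.last_activity > self.idle_timeout:
            return False                            # inactivity period expired; sticky routing ends
        x0, y0, x1, y1 = self.box
        inside = x0 <= x <= x1 and y0 <= y <= y1
        if inside:
            self.last_activity = time.monotonic()   # activity inside the box keeps it alive
        return inside
```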
The selection logic 150 can also be implemented to generate graphics. For example, consider a case where the sensing surface 122 is being used to simulate a pointer device such as a mouse. One state (or state-transition combination) can be used to trigger display of an inert pointer on one of the user interface units 152/154. If the first user interface unit 152 is a first display and the second user interface unit 154 is a second display, the selection logic can issue instructions for a pointer graphic to be displayed on the second display. If the second user interface or display is capable of handling pointer-style input events (e.g., mouse, touch, generic pointer), then the pointer graphic can be generated by transforming corresponding pressure points into pointer-move events, which can allow associated software to respond to pointer-over or pointer-hover conditions. If the second user interface or display is incapable of (or not in a state for) handling the pointer-style input events, then the selection logic 150, through the operating system, window manager, etc., can cause an inert graphic, such as a phantom finger, to be displayed on the second user interface or display, thus allowing the user to understand how their touch input currently physically correlates with the second user interface or display. When the user's input reaches a sufficient pressure, the pressure points may be transformed or passed through as needed. Thus, a scenario can be implemented where a user (i) inputs inert first touch inputs at a first pressure level on a first display to move a graphic indicator on a second display, and (ii) inputs active second touch inputs at a second pressure level and, due to the indicator, knows where the active second touch inputs will take effect.
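The pointer-graphic behavior can be summarized with a small routing function. The function name, the 0.5 threshold, and the returned event tuples are assumptions; the point is only to show light touches producing hover or inert output and harder touches producing active input for the second user interface or display.

```python
def handle_pressure_point(x: int, y: int, pressure: float,
                          accepts_pointer_events: bool, threshold: float = 0.5):
    if pressure < threshold:
        if accepts_pointer_events:
            return ("pointer-move", x, y)        # second UI can react to pointer-over/hover
        return ("draw-inert-pointer", x, y)      # e.g., a phantom-finger graphic with no UI effect
    return ("pointer-down", x, y)                # active input delivered to the second UI
```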
Initially, in
When the user touches the sensing surface 122 with pressure above (or below) a threshold (second touch input 312), the pressure selection logic 150 takes action to cause the second touch input 312 to associate with the second user interface unit 154 and/or the second display 104. The lower-pressure first touch input 310 is represented by a dashed line on the sensing surface 122 to signify that the input occurs on the first display 102 but does not act on the second user interface unit 154; a similar dashed line 316 on the second user interface unit 154 shows the path of the pointer graphic 314 according to the first touch input 310. The higher-pressure second touch input 312 is represented by a solid line 318 on the second user interface unit 154 to signify that the second touch input 312 operates on the second display/UI.
If the first touch input 310 begins being inputted with pressure above the threshold, then the first touch input 310 would immediately be associated with the second user interface unit 154. Similarly, if the second touch input 312 does not exceed the threshold, then the second touch input would associate with the first user interface unit 152 instead of the second user interface unit 154. Moreover, other types of inputs besides strokes may be used. The inputs may be mere dwells at the same input point but with different pressures; i.e., dwell inputs/events might be directed to the first user interface unit 152 until the dwelling input point increases to sufficient pressure to associate with the second user interface unit 154. The inputs might also be taps or gestures that include a pressure component; a first low-pressure tap is directed to the first user interface unit 152 and a second higher-pressure tap is directed to the second user interface unit 154.
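Tap and dwell variants fit the same pattern. The short sketch below, which reuses the LayerSelector sketched earlier with assumed thresholds, shows a dwell whose rising pressure re-targets the input mid-dwell.

```python
selector = LayerSelector(enter_lower=0.5, stay_lower=0.5)
dwell_pressures = [0.2, 0.25, 0.3, 0.7, 0.75]          # pressure samples at the same input point
targets = [selector.route(p) for p in dwell_pressures]
# -> ['first UI', 'first UI', 'first UI', 'second UI', 'second UI']
```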
In another embodiment, the user is able to control how input is handled in combination with gestures. That is, gestures may have a pressure component. Gestures meeting a first pressure condition (e.g., on initial pressure, average pressure, etc.) may be directed to the first user interface and gestures meeting a second pressure condition may be directed to the second user interface. Multi-finger embodiments can also be implemented. Multi-finger inputs can entail either multiple simultaneous pointer events (e.g., tapping with two fingers) or a multi-finger gesture (e.g., a pinch or a two-finger swipe). While the preceding paragraphs relate to interactions that parallel a traditional mouse-driven user interface, extension to multi-finger interactions allows a user to play games (e.g., slicing multiple pieces of fruit in a popular fruit-slicing game) or perform other more advanced interactions on the external display while providing pressure-sensitive input on the device.
The user interface unit 154 of
As shown in
The example of
Many variations are possible. Of note is the notion of using pressure as a means of enabling a user to control how touch inputs are to be handled when the touch inputs have the potential to affect multiple user interfaces, such as when one pressure-sensing surface is concurrently available to provide input to two different targets: two displays, two overlapping user interfaces, global or shell gestures versus application-specific gestures, and others.
Moreover, the pressure selection techniques described herein can be used to select different interaction modalities or interaction models. As noted above, measures of input pressure can be used to alter or augment input event streams. If an application is configured only for one form of pointer input, such as mouse-type input, then pressure can be used to select an input mode where touch input events are translated into mouse input events to simulate use of a mouse. Although embodiments are described above as involving selection of a user interface using pressure, the same pressure-based selection techniques can be used to select input modes or interaction models.
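The following sketch shows the kind of translation such a mode could perform; the event-name mapping, the function name, and the threshold are illustrative assumptions rather than any particular platform's API.

```python
def translate_for_mode(event_type: str, x: int, y: int, pressure: float,
                       mode_threshold: float = 0.5):
    """In the pressure-selected 'mouse' mode, rewrite touch events as mouse events."""
    if pressure >= mode_threshold:
        mapping = {"touch-down": "mouse-button-down",
                   "touch-move": "mouse-move",
                   "touch-up": "mouse-button-up"}
        return (mapping.get(event_type, event_type), x, y)
    return (event_type, x, y)
```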
In some embodiments, it may be helpful to evaluate only the initial pressure of an input against a pressure condition. When a stroke, swipe, tap, dwell, or combination thereof is initiated, the initial pressure may be evaluated to determine which user interface the entire input will be directed to. If a tap is evaluated, the average pressure for the first 10 milliseconds might serve as the evaluation condition, and any subsequent input from the same touch, stroke, etc., is all directed to the same target.
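A sketch of the initial-pressure approach follows; the 10-millisecond window and the function name simply echo the example above and are not prescriptive.

```python
def initial_target(samples, threshold: float = 0.5, window_ms: float = 10.0) -> str:
    """samples: (timestamp_ms, pressure) pairs for one touch, in time order."""
    t0 = samples[0][0]
    window = [p for t, p in samples if t - t0 <= window_ms]
    avg = sum(window) / len(window)                 # average pressure over the initial window
    return "second UI" if avg >= threshold else "first UI"

# Every later point of the same touch or stroke is then sent to this one target.
```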
While thresholds have been mentioned as types of pressure conditions, time-based conditions may also be used. The rate of pressure change, for instance, can be used. Also, pressure conditions can be implemented as a pressure function, where pressure measured as a function of time is compared to values of a time-based pressure function, pattern, or profile.
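A time-based pressure condition might be implemented as a comparison against a stored profile, as in this sketch (the tolerance and the sample profile are assumptions).

```python
def matches_profile(measured, profile, tolerance: float = 0.15) -> bool:
    """measured/profile: pressures sampled at the same instants in time."""
    return len(measured) == len(profile) and all(
        abs(m - p) <= tolerance for m, p in zip(measured, profile))

# e.g., a "press hard, then ease off" profile
assert matches_profile([0.85, 0.7, 0.5, 0.3], [0.9, 0.7, 0.5, 0.3])
```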
Because touch inputs might be inputted on one device and displayed on another device, a user may in a sense be operating the input device without looking at the input device. To help the user perceive where a touch point is moving, haptic feedback can be used based on the touch point encountering objects. For example, if a touch input is moved logically over the edge of a graphic object, haptic feedback can be triggered by the intersection of the re-directed touch input and the graphic object, thus giving the user a sense of touching the edge of the object. The same approach can be useful for perceiving the boundaries of the target user interface. If only a certain area of the sensing surface is mapped to the target user interface, then haptic feedback can be triggered when a touch point reaches the edge of that area, thus informing the user. This haptic feedback technique can be particularly useful during drag-and-drop operations to let the user know when a potential drop target has been reached. Preferably, haptic feedback is used in combination with visual feedback shown on the external display (at which the user is presumably looking).
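This edge-feedback idea reduces to a boundary-crossing test, sketched below with assumed names; the haptics object stands in for whatever feedback channel the device provides.

```python
def edge_feedback(x: float, y: float, was_inside: bool, rect, haptics) -> bool:
    """rect: (x0, y0, x1, y1) of a graphic object or of the mapped target area."""
    x0, y0, x1, y1 = rect
    inside = x0 <= x <= x1 and y0 <= y <= y1
    if inside != was_inside:      # the touch point just crossed the boundary
        haptics.pulse()           # give the user a sense of touching the edge
    return inside
```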
The computing device 350 may have one or more displays 102/104, a network interface 354 (or several), as well as storage hardware 356 and processing hardware 358, which may be a combination of any one or more of: central processing units, graphics processing units, analog-to-digital converters, bus chips, FPGAs, ASICs, Application-specific Standard Products (ASSPs), or Complex Programmable Logic Devices (CPLDs), etc. The storage hardware 356 may be any combination of magnetic storage, static memory, volatile memory, non-volatile memory, optically or magnetically readable matter, etc. The meaning of the term “storage”, as used herein, does not refer to signals or energy per se, but rather refers to physical apparatuses and states of matter. The hardware elements of the computing device 350 may cooperate in ways well understood in the art of computing. In addition, input devices may be integrated with or in communication with the computing device 350. The computing device 350 may have any form factor or may be used in any type of encompassing device. For example, the computing device 350 may be in the form of a handheld device such as a smartphone or tablet computer, a gaming device, a server, a rack-mounted or backplaned computer-on-a-board, a system-on-a-chip, or another form.
Embodiments and features discussed above can be realized in the form of information stored in volatile or non-volatile computer or device readable storage hardware. This is deemed to include at least hardware such as optical storage (e.g., compact-disk read-only memory (CD-ROM)), magnetic media, flash read-only memory (ROM), or any current or future means of storing digital information. The stored information can be in the form of machine executable instructions (e.g., compiled executable binary code), source code, bytecode, or any other information that can be used to enable or configure computing devices to perform the various embodiments discussed above. This is also deemed to include at least volatile memory such as random-access memory (RAM) and/or virtual memory storing information such as central processing unit (CPU) instructions during execution of a program carrying out an embodiment, as well as non-volatile media storing information that allows a program or executable to be loaded and executed. The embodiments and features can be performed on any type of computing device, including portable devices, workstations, servers, mobile wireless devices, and so on.