Touch-enabled platforms, such as tablets and mobile phones, are increasing in popularity. Given that it is common for a person to own a laptop computer, a tablet computer, and a smart phone, it is also common for a person to use two or more of these devices at the same time. It is easy to imagine the value of dynamically combining the resources of multiple devices. It would be useful, for example, to join the displays of two tablets placed side-by-side to act as one logical display. While such a system may provide a single logical display, it does not allow for seamless input across the devices. In particular, there is currently no mechanism available that would allow a user to start a pointing operation, such as a drag operation, on one device and then cross over to the second device to finish the drag. Lifting the user's finger or stylus off of the first device terminates any in-progress motion-based operation. Moreover, there is currently no way to determine whether the user, when he lifts his stylus, wishes to continue that drag operation on another device, or whether he wishes to end the input.
FIGS. 5a and 5b are process flowcharts illustrating the process described herein, according to an embodiment.
In the drawings, the leftmost digit(s) of a reference number identifies the drawing in which the reference number first appears.
An embodiment is now described with reference to the figures, where like reference numbers indicate identical or functionally similar elements. While specific configurations and arrangements are discussed, it should be understood that this is done for illustrative purposes only. A person skilled in the relevant art will recognize that other configurations and arrangements can be used without departing from the spirit and scope of the description. It will be apparent to a person skilled in the relevant art that the systems and methods described herein can also be employed in a variety of systems and applications beyond those described herein.
Disclosed herein are methods and systems that allow users to gain the advantages of a large-format touch display by using smaller, more cost-effective touch displays. Given two adjacent displays, dynamic regions may be created on both sides of the boundary between the two component displays. These regions may grow and shrink dynamically based on the user's movement, i.e., the velocity of a stylus or finger towards the boundary. If the user lifts his stylus or finger within a region on one display, he may have the opportunity to finish the tracking action on the other display by landing within the corresponding region of that display. This may allow a user to begin a drag operation on one touch display, drag towards another touch display, and “flyover” to the second display without slowing down to complete the drag. The unwanted lift event may be removed when the first touch display detects the stylus or finger being lifted as it moves towards the second display. The landing event on the second display may also be removed.
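As a rough illustration of the region test described above, the following sketch assumes two equal-resolution displays sharing a vertical edge, with horizontal pixel coordinates measured from each display's left edge. The function names, the zone width, and the coordinate convention are illustrative assumptions, not details of the embodiment.

```python
# Sketch of the lift/landing region test. The geometry (two displays
# sharing a vertical edge, x measured from each display's left edge)
# and all names here are assumptions made for illustration.

def in_lift_zone(x, zone_width, display_width):
    """True if a contact at horizontal position x on the first display
    falls within the lift zone abutting the shared (right) edge."""
    return display_width - zone_width <= x <= display_width

def in_landing_zone(x, zone_width):
    """True if a contact at horizontal position x on the second display
    falls within the landing zone abutting the shared (left) edge."""
    return 0 <= x <= zone_width
```

Under these assumptions, a lift at x = 1910 on a 1920-pixel-wide display with a 40-pixel zone qualifies for a flyover, and a landing at x = 25 on the second display would complete it.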
The system is illustrated in FIG. 1.
Note that while interaction with the tablets 110 and 120 is shown in
In addition, the terms “lift zone” and “landing zone” may be defined in terms of the tablet on which the user interaction begins. In
Moreover, while the illustrated embodiment shows two horizontally adjacent displays, the system and methods described herein may also be applied in an analogous manner to two displays that are vertically adjacent, such that the top edge of one display abuts the lower edge of a second display.
User interaction with touch displays such as tablets 110 and 120 is illustrated in
The expansion of a lift zone and a landing zone in response to a user movement is illustrated in
In alternative embodiments, the lift zone 115 may expand as a different function of the velocity of the contact point 150. For example, the function by which the lift zone expansion and the contact point velocity are related may be non-linear; the function may be, for example, a square or an exponential function, and/or may entail scaling. These functions are intended as examples, and are not meant to be limiting.
In an embodiment, the lift zone 115 and landing zone 125 may have a minimum default size, as shown in
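The velocity-dependent sizing described above, including the minimum default size, can be sketched as a single function. The minimum width, the gain, and the particular linear, square, and exponential forms below are assumptions made for the example; the embodiment only requires that the zone grow with the contact point's velocity towards the boundary and never shrink below a default minimum.

```python
# Illustrative sketch of velocity-dependent zone sizing. All constants
# and the specific functional forms are assumptions for the example.
import math

MIN_ZONE_WIDTH = 20.0  # assumed minimum default size, in pixels

def zone_width(velocity, mode="linear", gain=0.1):
    """Return the lift/landing zone width for a given contact-point
    velocity (pixels/second) towards the shared display edge."""
    v = max(velocity, 0.0)  # motion away from the edge does not expand the zone
    if mode == "linear":
        grown = MIN_ZONE_WIDTH + gain * v
    elif mode == "square":
        grown = MIN_ZONE_WIDTH + gain * v * v
    elif mode == "exponential":
        grown = MIN_ZONE_WIDTH * math.exp(gain * v / 100.0)
    else:
        raise ValueError(mode)
    return max(grown, MIN_ZONE_WIDTH)  # clamp to the minimum default size
```

With these assumed constants, a stationary contact point leaves the zone at its 20-pixel minimum, while a contact moving at 100 pixels/second towards the boundary expands a linear zone to 30 pixels.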
The processing for these operations may be illustrated as a state diagram, as shown in
When the stylus is lifted within the lift zone, the system may move from state 2 to state 1′. In state 1′, the stylus is no longer contacting the screen, but the underlying input hardware may still sense its position. Because the user is in the process of moving between screens, no input events may be passed on; the application software may therefore be unaware of the stylus lift. The net effect is that the cursor may appear to freeze at the point of the lift.
If the stylus tracks away from the lift or landing zones while in state 1′, the buffered lift event may be fired, and the system may transition to state 1. The transition from state 1′ to 1 may also occur due to an internal timeout. The timeout may be required to deal with situations in which a user drags an object inside the front but then hovers for a period of time without moving. This behavior could occur, for example, when the user finishes a movement but is resting the pen above the screen.
As the drag between displays continues from state 1′, the stylus may move up and over between the displays. In doing so, the stylus may move out of the sensing range of the screen, and the system may enter state 0′. As above, a timeout may cause the system to transition to state 0. If the stylus re-enters tracking range outside of the front, the system may move to state 1. Finally, re-entering the tracking range inside the front may return the system to state 1′. To transition back to state 2 from state 1′ and finish the drag, the stylus may make contact with the screen within the landing zone before the timeout is triggered (i.e., before the timeout interval concludes).
After contact is successfully made in the landing zone, a move event may be created from the coordinate where the user lifted (the position of the 2-to-1′ transition) to the coordinate where contact is made again (the position of the 1′-to-2 transition). The net result from the application's perspective may be that the user momentarily stopped moving during the drag on one display and then resumed movement on the second display.
In both states 0′ and 1′, the “prime” may signify that any applications listening to the input stream still believe the input device to be frozen in state 2, while the digit (0 or 1) may signify the actual state of the underlying input device. Until the user continues the dragging action on the other side of the edge, or the system times out, any software receiving events may believe that the user has simply frozen in the middle of a dragging action.
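The state diagram described above can be sketched as follows. The state names follow the text (2, 1, 1′, 0, 0′); the event handler names, the emit() callback, the timeout value, and the clock handling are all assumptions made for illustration. The caller is assumed to call tick() periodically and to classify each position against the current lift and landing zones.

```python
# Minimal sketch of the flyover state machine. All names other than
# the states themselves are illustrative assumptions.
import time

class FlyoverStateMachine:
    def __init__(self, timeout=1.0, emit=print):
        self.state = "0"            # stylus out of sensing range
        self.emit = emit            # delivers events to listening applications
        self.timeout = timeout
        self.buffered_lift = None   # position of the suppressed lift event
        self.deadline = None

    def _fire_buffered_lift(self):
        # Timeout or track-away: release the suppressed lift so the
        # application finally sees the drag end.
        if self.buffered_lift is not None:
            self.emit(("lift", self.buffered_lift))
            self.buffered_lift = None
        self.deadline = None

    def touch(self, pos, in_landing_zone, now=None):
        # 1' -> 2 inside the landing zone: discard the buffered lift and
        # synthesize a move from the lift point to the landing point.
        if self.state == "1'" and in_landing_zone:
            self.emit(("move", self.buffered_lift, pos))
            self.buffered_lift, self.deadline = None, None
        else:
            self._fire_buffered_lift()   # landing elsewhere ends any suppressed drag
            self.emit(("down", pos))
        self.state = "2"

    def lift(self, pos, in_lift_zone, now=None):
        # 2 -> 1' inside the lift zone: buffer the lift, start the timeout.
        if self.state != "2":
            return
        if in_lift_zone:
            self.buffered_lift = pos
            self.deadline = (time.monotonic() if now is None else now) + self.timeout
            self.state = "1'"
        else:
            self.emit(("lift", pos))
            self.state = "1"

    def hover_move(self, in_zone):
        # 1' -> 1 when the stylus tracks away from the lift/landing zones.
        if self.state == "1'" and not in_zone:
            self._fire_buffered_lift()
            self.state = "1"

    def leave_range(self):
        # Out of sensing range: 1 -> 0 and 1' -> 0'.
        if self.state == "1":
            self.state = "0"
        elif self.state == "1'":
            self.state = "0'"

    def enter_range(self, in_zone):
        # Back into tracking range: 0' -> 1' inside the front, else 0' -> 1.
        if self.state == "0'":
            if in_zone:
                self.state = "1'"
            else:
                self._fire_buffered_lift()
                self.state = "1"
        elif self.state == "0":
            self.state = "1"

    def tick(self, now=None):
        # Internal timeout: 1' -> 1 and 0' -> 0, firing the buffered lift.
        t = time.monotonic() if now is None else now
        if self.state in ("1'", "0'") and self.deadline is not None and t >= self.deadline:
            self._fire_buffered_lift()
            self.state = "1" if self.state == "1'" else "0"
```

A full flyover under this sketch would be: touch on display one (state 2), lift inside the lift zone (state 1′, lift buffered), leave sensing range (state 0′), re-enter it over display two inside the front (state 1′), and touch inside the landing zone before the deadline, at which point the synthesized move event is delivered and the drag resumes in state 2.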
While the process as presented above assumes the ability to sense a hover state, the technique may also be implemented with a two-state input device. The above process may be modified by merging states 0 and 0′ into states 1 and 1′, respectively. Ensuring that the visual representation of the cursor properly responds to the user may require the hover events above; however, such feedback would not be necessary in a two-state input system, such as a system that includes a resistive touchscreen and does not use an on-screen representation of the cursor.
At 515, the contact point may enter the lift zone. At 520, a determination may be made as to whether a lift has been performed inside the lift zone. If so, then at 525 the lift event may be buffered and not otherwise processed. At 530, a timeout counter may be started.
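The branch at 520 through 530 can be sketched as a single decision. The function name, the return shape, and the clock source below are assumptions made for illustration.

```python
# Sketch of flowchart steps 520-530: buffer a lift inside the lift
# zone and start a timeout counter; dispatch other lifts normally.
import time

def handle_lift(pos, inside_lift_zone, timeout=1.0, now=None):
    """Return (event, deadline). A lift inside the lift zone is buffered
    rather than dispatched (525) and a timeout counter is started (530);
    a lift elsewhere is dispatched immediately with no deadline."""
    t = time.monotonic() if now is None else now
    if inside_lift_zone:
        return ("buffered_lift", pos), t + timeout
    return ("lift", pos), None
```

The returned deadline would then be checked on each subsequent input or timer event to decide whether the buffered lift must finally be fired.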
Referring to
Methods and systems are disclosed herein with the aid of functional building blocks illustrating the functions, features, and relationships thereof. At least some of the boundaries of these functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternate boundaries may be defined so long as the specified functions and relationships thereof are appropriately performed.
One or more features disclosed herein may be implemented in hardware, software, firmware, and combinations thereof, including discrete and integrated circuit logic, application specific integrated circuit (ASIC) logic, and microcontrollers, and may be implemented as part of a domain-specific integrated circuit package, or a combination of integrated circuit packages. The term software, as used herein, refers to a computer program product including a computer readable medium having computer program logic stored therein to cause a computer system to perform one or more features and/or combinations of features disclosed herein.
A software or firmware embodiment of the processing described above is illustrated in FIG. 6.
Computer program logic 640 may include computer readable code that, when read and executed by processor 620, results in the processing described above with respect to
While various embodiments are disclosed herein, it should be understood that they have been presented by way of example only, and not limitation. It will be apparent to persons skilled in the relevant art that various changes in form and detail may be made therein without departing from the spirit and scope of the methods and systems disclosed herein. Thus, the breadth and scope of the claims should not be limited by any of the exemplary embodiments disclosed herein.