The present invention relates to controlling a touch screen user interface.
With the popularization of mobile devices, touch and multi-touch interfaces are becoming increasingly common. Hence, there is a need for more effective ways to control such interfaces.
A touch device typically refers to a device that is controlled, among other means, by giving commands or selecting objects or the like displayed on a screen by touching that screen (i.e. touch commands or touch gestures). A multi-touch device typically refers to a touch device that can accept and process two or more such touch commands at once. A touch point typically refers to a point at which a finger, a stylus, a capacitive pen or the like touches a touch device. Adding a touch point typically refers to the action of placing the finger, stylus, capacitive pen or the like on the touch device so that it touches its surface. Breaking a touch point typically refers to the action of moving the touching finger, stylus, capacitive pen or the like away from the touch device so that it no longer touches the screen of the device.
The known devices provide a variety of commands that allow for interaction with the objects displayed on a multi-touch screen. These include the following: a tap (touching the screen only briefly), a two-finger tap (touching the screen only briefly at two points simultaneously), a drag (touching an object on the screen, keeping the touch and moving the touch point to another location), a pinch (touching the screen at two places, keeping the touch and then moving the two touch points closer to each other), a zoom or pinch out (touching the screen at two places, keeping the touch and then moving the two touch points further apart from each other), and a flick (touching the screen and immediately moving the touch point sideways).
Each of these commands allows manipulating objects displayed on the screen, but performs only one action at a time. If a command is interrupted, it may leave the user interface handling it in an undefined state. It is clear to one skilled in the art that, for example, dragging is a particularly long process that involves locating the object on the screen, touching and holding it, and then placing it at the target position without breaking the touch point. Furthermore, locating the object may be difficult for the user due to the small size of the screen or of the object, or due to other qualities inherent to the object such as animation, color or the like. Hence, in a situation where an object is being dragged on the screen, it may happen that the target location is not visible on the screen. In such a case the drag command needs to be interrupted before being finished, the view of the screen needs to be adjusted by the user, and the action needs to be restarted.
For example, a music player application on a smartphone may comprise a play list containing a multitude of music pieces. The user of the player application may typically drag one of the pieces of music to a different position by using a drag command. However, during the drag, the user may notice that the piece being dragged should belong to a different list than the presently displayed one. In order to move that piece to the other list, the user needs to lift the finger to interrupt the drag. This typically yields one of two results: either the dragged item is dropped at the present position (which is not the intended target position), or the item is reverted to its original position. Both outcomes are undesired from the point of view of the user. As a consequence, the user needs to locate the item again, access its context options and perform further manipulations to move the piece to the desired list.
Some alternative solutions to the aforementioned situation are known.
US patent application US20100162181 presents a method of changing the parameters of a multi-touch command by interpreting the introduction or removal of an additional touch point while the command is still in progress. Applied to the music player application described above, it could allow quickening the scrolling speed by adding or removing a second touch point during the dragging motion. However, such a solution requires a certain fluency and dexterity in handling the command in order to achieve the desired effects.
There is a need to provide an alternative touch interface that would at least partially eliminate the drawbacks of the present touch interfaces, by introducing a more static solution that requires less fluency or dexterity in handling multi-touch commands.
The method according to the present invention combines two or more component touch commands into a single combined command, providing increased functionality to the user of the multi-touch device by presenting the user with a context menu and allowing selection of a desired action from the menu.
The object of the invention is a method for controlling a touch screen user interface, comprising the steps of: receiving an indication that a touch input has been initiated via the user interface; monitoring the touch input until the touch is paused, broken or a new touch point is added to recognize a first component command; continuing to monitor further touch input to recognize a second component command; determining a combined command related to the first component command and the second component command; and performing an action associated with the determined combined command.
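By way of illustration only, the control flow of this method could be sketched in code as below, assuming a simplified touch-event model. All names used here (PointerSample, ComponentCommand, recognizeComponentCommand and the like) are hypothetical and do not refer to any real platform API; the recognizer body is a placeholder.

    // Hypothetical event model: each sample is one observation of the touch input.
    type PointerSample =
      | { kind: "down"; x: number; y: number }  // a touch point is added
      | { kind: "move"; x: number; y: number }  // a held touch point moves
      | { kind: "pause" }                       // the touch is paused (held still)
      | { kind: "up" };                         // a touch point is broken

    type ComponentCommand = "tap" | "drag" | "pinch" | "zoom" | "unknown";

    // Placeholder: a real recognizer would match the run of samples against
    // the known gesture definitions (tap, drag, pinch, zoom, flick, ...).
    function recognizeComponentCommand(run: PointerSample[]): ComponentCommand {
      return "unknown";
    }

    // An event that ends the monitoring of the first component command:
    // the touch is paused, broken, or a new touch point is added.
    function delimitsFirstCommand(e: PointerSample): boolean {
      return e.kind === "pause" || e.kind === "up" || e.kind === "down";
    }

    function handleCombinedCommand(events: PointerSample[]): void {
      // Monitor the touch input until it is paused, broken or a new touch
      // point is added; this run forms the first component command.
      let i = 1; // index 0 is the touch that initiated the input
      while (i < events.length && !delimitsFirstCommand(events[i])) i++;
      const first = recognizeComponentCommand(events.slice(0, i));
      // Continue monitoring; the delimiting event itself forms part of the
      // second component command.
      const second = recognizeComponentCommand(events.slice(i));
      // Determine the combined command and perform the associated action.
      performCombinedAction(first, second);
    }

    // Placeholder for the action associated with the determined combined command.
    function performCombinedAction(first: ComponentCommand, second: ComponentCommand): void {
      console.log(`combined command: ${first} then ${second}`);
    }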
Preferably, the action associated with the determined combined command is selected as a function of at least one variable selected from the group consisting of: the position of the touch point at the beginning, end or during the first or second touch command; the relative position of any two or more touch points at the beginning, end or during the first or second touch command; and the context of an application executed at the device.
Preferably, the pausing of the touch, the breaking of a touch point or the addition of a new touch point is considered as the initiation of the second component command and forms part of the second component command.
Preferably, the method further comprises, after performing the action associated with the determined combined command, receiving an indication that a further touch input has been initiated via the user interface and monitoring the further touch input until the touch is paused, broken or a new touch point is added.
Preferably, the first component command is a single touch command.
Preferably, the first component command is a multi-touch command.
Preferably, the second component command is a single touch command.
Preferably, the second component command is a multi-touch command.
The object of the invention is also a computer program comprising program code means for performing all the steps of the method described above, as well as a computer readable medium storing computer-executable instructions performing all the steps of the method as described above.
The object of the invention is also a device comprising a touch screen user interface operable by a user interface controller, wherein the user interface controller is configured to execute all the steps of the method as described above.
Further details and features of the present invention, its nature and various advantages will become more apparent from the following detailed description of the preferred embodiments shown in the drawing.
When any of the events of steps 102-104 has been detected, the method perceives further input from the user as the initiation of a second component command in step 105. The input of the second command is then monitored in step 106 until it is detected in step 107 that the command is finished. The command is considered finished in step 107 upon detecting an event that completes a known definition of a second component command; for example, a drag command can be recognized as starting by touching a touch point, executed by holding the touch while moving the touch point, and finished by breaking the touch point. Therefore, the further input from the user, following the completion of the first component command in one of steps 102-104, is monitored in steps 105-107 until a second component command is recognized in step 107.
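For illustration, the recognition in step 107 of a drag as the second component command could be sketched as follows, reusing the hypothetical PointerSample type introduced earlier; this is one possible heuristic rather than a prescribed implementation.

    // A drag is started by touching a touch point, executed by holding the
    // touch while moving it, and finished by breaking the touch point.
    function finishedDrag(run: PointerSample[]): boolean {
      if (run.length < 3) return false;
      const moved = run.some((e) => e.kind === "move");
      return run[0].kind === "down" && moved && run[run.length - 1].kind === "up";
    }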
After the entry of the second component command is finished, a combined command that corresponds to the succession of the first component command and the second component command is determined in step 109.
If a combined command is detected, an action related to that combined command is performed in step 109. For example, a context menu can be displayed to the user, configured to present a selection of actions appropriate to the present state of the device. The present state of the device can be determined as a function of variables such as: the position of the touch point at the beginning, end or during the first or second touch command; the relative position of any two or more touch points; and the context of the application executed at the device.
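One conceivable realization of step 109 is a lookup keyed by the succession of the two component commands, as sketched below; the table entries and the action names are illustrative assumptions only.

    // Determining the combined command from the succession of the two
    // component commands; entries are illustrative only.
    const combinedActions = new Map<string, () => void>([
      ["drag+zoom", () => showContextMenu()], // e.g. present options for the dragged item
      ["drag+tap", () => cancelDrag()],       // e.g. abort the manipulation
    ]);

    function determineAndPerform(first: ComponentCommand, second: ComponentCommand): void {
      combinedActions.get(`${first}+${second}`)?.(); // no action for unknown pairs
    }

    // Hypothetical actions, named for illustration only.
    function showContextMenu(): void { /* render a menu for the present device state */ }
    function cancelDrag(): void { /* revert the dragged element */ }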
Preferably, the context menu is displayed at the point at which the second command has been finished, but it can be also displayed in other screen positions, such as at a predefined position or at a predefined distance from the point at which the second command has been finished.
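For illustration, such a positioning policy could be sketched as follows; the mode names and the offset value are assumptions, not prescribed by the method.

    // Choosing where to display the context menu; endPoint is the point at
    // which the second command has been finished.
    function menuPosition(
      endPoint: { x: number; y: number },
      mode: "atEndPoint" | "atOffset" | "predefined"
    ): { x: number; y: number } {
      switch (mode) {
        case "atEndPoint": return endPoint;
        case "atOffset":   return { x: endPoint.x + 24, y: endPoint.y + 24 }; // predefined distance
        case "predefined": return { x: 0, y: 0 }; // e.g. a fixed screen corner
      }
    }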
Therefore, the method described above allows a combined command, corresponding to a succession of two component commands, to be recognized and acted upon.
In other embodiments, the method may be extended to receive more than two component commands, by returning to step 100 after step 104 (line 110).
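For illustration, this extension could be sketched as a loop that segments the input at each pause, break or added touch point, reusing the hypothetical helpers introduced earlier.

    // Recognizing an arbitrary number of component commands in succession.
    function recognizeAll(events: PointerSample[]): ComponentCommand[] {
      const commands: ComponentCommand[] = [];
      let start = 0;
      for (let i = 1; i <= events.length; i++) {
        // A pause, a break or a new touch point closes the current run...
        if (i === events.length || delimitsFirstCommand(events[i])) {
          commands.push(recognizeComponentCommand(events.slice(start, i)));
          start = i; // ...and itself forms part of the next component command
        }
      }
      return commands;
    }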
The present invention allows improving a touch user interface with a combined command comprising drag and zoom component commands, as shown in the drawing.
It is possible to allow the user to make further actions after step 6 of the example shown in the drawing.
For example, the user may lift the second finger while still holding the first finger. This allows the user to select an option from the context menu by tapping it with the second finger (wherein the tap command would be recognized by repeating steps 100-104 of the method described above).
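For illustration, the recognition of such a menu selection could be sketched as below, again reusing the hypothetical PointerSample type; the hit-test helper is an assumption.

    // A tap: the screen is touched only briefly at one point and released.
    function onFurtherInput(run: PointerSample[], menuItems: string[]): void {
      if (run.length !== 2) return;
      const [first, last] = run;
      if (first.kind === "down" && last.kind === "up") {
        selectMenuItemAt(menuItems, first.x, first.y); // hypothetical hit test
      }
    }

    function selectMenuItemAt(items: string[], x: number, y: number): void {
      // Hypothetical: hit-test the displayed menu and invoke the tapped option.
    }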
Another example is shown in the drawing.
The actions related to the zoom feature can be adjusted to the parameters of the zoom, e.g. the direction of the zoom.
For example, for a vertical zoom command, actions such as those shown in the drawing can be presented.
Therefore, in the example of the drawing, the set of actions presented to the user depends on the direction of the zoom command.
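For illustration, the dependence of the offered actions on the zoom direction could be sketched as follows; the option labels are illustrative only.

    // Adjusting the offered actions to the direction of the zoom command.
    function actionsForZoomDirection(direction: "vertical" | "horizontal"): string[] {
      return direction === "vertical"
        ? ["Move to another playlist", "Delete"] // illustrative labels
        : ["Share", "Details"];
    }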
Consequently, upon completion of the second component command, the application can perform a different action than the display of the aforementioned context menu.
An exemplary action may comprise an immediate move of the dragged element to a predefined list. Other exemplary reactions of the application to the completion of the aforementioned step 6 may comprise other predefined, context-dependent actions performed immediately, without displaying the context menu.
The method presented herein advances the process of manipulating objects displayed by and controlled with a multi-touch screen device by eliminating redundant actions in cases when either the conditions of the view change during a dragging command or the user decides upon other actions than originally intended. The method is particularly useful in user interfaces where it is difficult to find an object due to a large number of similar objects, or in applications where the selection of an object is difficult due to, for example, its small size.
The device can be any type of device comprising a touch screen, such as a personal computer, a smartphone, a tablet, a laptop, a television set etc.
It can be easily recognized by one skilled in the art that the aforementioned method for controlling the touch screen user interface may be performed and/or controlled by one or more computer programs. Such computer programs are typically executed by utilizing the computing resources of the device. The computer programs can be stored in a non-volatile memory, for example a flash memory, or in a volatile memory, for example RAM, and are executed by the processing unit. These memories are exemplary recording media for storing computer programs comprising computer-executable instructions performing all the steps of the computer-implemented method according to the technical concept presented herein.
While the invention presented herein has been depicted, described and defined with reference to particular preferred embodiments, such references and examples of implementation in the foregoing specification do not imply any limitation on the invention. It will, however, be evident that various modifications and changes may be made thereto without departing from the broader scope of the technical concept. The presented preferred embodiments are exemplary only, and are not exhaustive of the scope of the technical concept presented herein.
Accordingly, the scope of protection is not limited to the preferred embodiments described in the specification, but is only limited by the claims that follow.
This application claims priority to European patent application No. 15180461.4, filed in August 2015.