A user is able to control actions and elements of an application or applications on a graphical user interface using a touch sensitive input device. The touch sensitive input device may refer, for example, to a touch pad used in laptop computers or to a touch-sensitive display. When a selection of an item or element on the graphical user interface needs to be made from a plurality of alternatives, an easy and intuitive way of making the selection is desirable.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
In one embodiment, an apparatus is provided. The apparatus comprises at least one processing unit, at least one memory, a pressure level sensitive user input device and a graphical user interface. The at least one memory stores program instructions that, when executed by the at least one processing unit, cause the apparatus to detect that a pressure level applied on the pressure level sensitive user input device exceeds a predetermined pressure level, map the pressure level to an interaction layer of a set of interaction layers provided by the graphical user interface, detect release of the pressure on the pressure level sensitive user input device, and switch to the interaction layer mapped to the pressure level in response to detecting the release of the pressure on the pressure level sensitive user input device.
In another embodiment, a method is provided. The method comprises detecting that a pressure level applied on a pressure level sensitive user input device exceeds a predetermined pressure level, mapping the pressure level to an interaction layer of a set of interaction layers provided by a graphical user interface, detecting release of the pressure on the pressure level sensitive user input device, and switching to the interaction layer mapped to the pressure level in response to detecting the release of the pressure on the pressure level sensitive user input device.
In one embodiment, an apparatus is provided. The apparatus comprises at least one processing unit, at least one memory, a pressure level sensitive user input device and a graphical user interface. The at least one memory stores program instructions that, when executed by the at least one processing unit, cause the apparatus to detect that a pressure level applied on the pressure level sensitive user input device exceeds a predetermined pressure level, map the pressure level to an interaction layer of a set of interaction layers provided by the graphical user interface, provide an indication to a user, the indication indicating the interaction layer mapped to the pressure level, wherein the indication comprises at least one of a visual indication, a tactile indication and a vocal indication, detect release of the pressure on the pressure level sensitive user input device, and switch to the interaction layer mapped to the pressure level in response to detecting the release of the pressure on the pressure level sensitive user input device.
Many of the attendant features will be more readily appreciated as they become better understood by reference to the following detailed description considered in connection with the accompanying drawings.
The present description will be better understood from the following detailed description read in light of the accompanying drawings, wherein:
Like reference numerals are used to designate like parts in the accompanying drawings.
The detailed description provided below in connection with the appended drawings is intended as a description of the present examples and is not intended to represent the only forms in which the present examples may be constructed or utilized. However, the same or equivalent functions and sequences may be accomplished by different examples. Furthermore, as used in this application and in the claims, the singular forms “a,” “an,” and “the” include the plural forms unless the context clearly dictates otherwise. Additionally, the term “includes” means “comprises.” Further, the term “coupled” encompasses mechanical, electrical, magnetic, optical, as well as other practical ways of coupling or linking items together, and does not exclude the presence of intermediate elements between the coupled items.
The illustrated apparatus 100 can include a controller or processor 102 (e.g., signal processor, microprocessor, ASIC, or other control and processing logic circuitry) for performing such tasks as signal coding, data processing, input/output processing, power control, and/or other functions. An operating system 104 can control the allocation and usage of the components 138 and support for one or more application programs 106. The application programs can include common computing applications (e.g., email applications, calendars, contact managers, web browsers, messaging applications), or any other computing application.
The illustrated apparatus 100 can include a memory 106. The memory 106 can include non-removable memory 108 and/or removable memory 110. The non-removable memory 108 can include RAM, ROM, flash memory, a hard disk, or other well-known memory storage technologies. The removable memory 110 can include flash memory or a Subscriber Identity Module (SIM) card, which is well known in GSM communication systems, or other well-known memory storage technologies, such as “smart cards.” The memory 106 can be used for storing data and/or code for running the operating system 104 and the applications 106. Example data can include web pages, text, images, sound files, video data, or other data sets to be sent to and/or received from one or more network servers or other devices via one or more wired or wireless networks. The memory 106 can be used to store a subscriber identifier, such as an International Mobile Subscriber Identity (IMSI), and an equipment identifier, such as an International Mobile Equipment Identifier (IMEI). Such identifiers can be transmitted to a network server to identify users and equipment.
The apparatus 100 can support one or more input devices 112, such as a touchscreen 114, microphone 116, camera 118, and/or physical keys or a keyboard 120, and one or more output devices 122, such as a speaker 124 and a display 126. Other possible output devices (not shown) can include piezoelectric or other haptic output devices. Some devices can serve more than one input/output function. For example, the touchscreen 114 and the display 126 can be combined in a single input/output device. The input devices 112 can include a Natural User Interface (NUI). An NUI is any interface technology that enables a user to interact with a device in a “natural” manner, free from artificial constraints imposed by input devices such as mice, keyboards, remote controls, and the like. Examples of NUI methods include those relying on speech recognition, touch and stylus recognition, gesture recognition both on screen and adjacent to the screen, air gestures, head and eye tracking, vision, and machine intelligence. Other examples of an NUI include motion gesture detection using accelerometers/gyroscopes, facial recognition, 3D displays, head, eye, and gaze tracking, immersive augmented reality and virtual reality systems, all of which provide a more natural interface, as well as technologies for sensing brain activity using electric field sensing electrodes (EEG and related methods). Thus, in one specific example, the operating system 104 or applications 106 can comprise speech-recognition software as part of a voice user interface that allows a user to operate the apparatus 100 via voice commands. Further, the apparatus 100 can comprise input devices and software that allow for user interaction via a user's spatial gestures, such as detecting and interpreting gestures to provide input to a gaming application.
A wireless modem 128 can be coupled to an antenna (not shown) and can support two-way communications between the processor 102 and external devices, as is well understood in the art. The modem 128 is shown generically and can include a cellular modem for communicating with a mobile communication network and/or other radio-based modems (e.g., Bluetooth or Wi-Fi). The wireless modem 128 is typically configured for communication with one or more cellular networks, such as a GSM, WCDMA (Wideband Code Division Multiple Access), or LTE (Long Term Evolution) network, for data and voice communications within a single cellular network, between cellular networks, or between the apparatus and a public switched telephone network (PSTN).
The apparatus 100 can further include at least one input/output port 130, a satellite navigation system receiver 132, such as a Global Positioning System (GPS) receiver, an accelerometer 134, and/or a physical connector 136, which can be a USB port, IEEE 1394 (FireWire) port, and/or RS-232 port. The illustrated components 138 are not required or all-inclusive, as any components can be deleted and other components can be added.
In FIG. 2, an apparatus 200 comprising a touch-sensitive display is illustrated. By default, the apparatus 200 operates in a touch interaction layer, in which normal touch operations are available. When a user presses the touch-sensitive display with a pressure level exceeding a predetermined pressure level and then releases the applied pressure, the apparatus 200 switches to a force touch interaction layer.
Once the force touch interaction layer has been entered, another interaction mode is in use. The interaction mode may be, for example, an inking or drawing mode. The user is able to draw, for example, a line 206 by moving his finger. When drawing the line, the user no longer needs to apply the greater pressure level that was used to enter the force touch interaction layer. To exit the force touch interaction layer, the user may again firmly press the touch-sensitive display with a pressure level exceeding the predetermined pressure level and then release the applied pressure, whereupon the apparatus 200 switches back to the touch interaction layer.
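For illustration only, the toggle described above can be sketched as a small state machine: a press exceeding the predetermined pressure level arms a pending switch, and the release commits it. The following Python sketch makes this concrete; the callback names, the pressure units and the threshold value are assumptions, not part of the description.

```python
# Minimal sketch of the two-layer toggle, assuming the platform delivers
# pressure samples and release events to the callbacks below. All names
# and units are illustrative, not taken from the description.

TOUCH_LAYER = "touch"
FORCE_TOUCH_LAYER = "force_touch"  # e.g., an inking/drawing mode

class LayerToggle:
    def __init__(self, threshold: float):
        self.threshold = threshold  # the predetermined pressure level
        self.layer = TOUCH_LAYER    # currently active interaction layer
        self._armed = False         # has the press exceeded the threshold?

    def on_pressure(self, pressure: float) -> None:
        # A firm press only arms the toggle; the switch happens on release,
        # so ordinary drawing strokes need not stay above the threshold.
        if pressure > self.threshold:
            self._armed = True

    def on_release(self) -> None:
        if self._armed:
            self.layer = (FORCE_TOUCH_LAYER if self.layer == TOUCH_LAYER
                          else TOUCH_LAYER)
            self._armed = False

toggle = LayerToggle(threshold=1.0)
toggle.on_pressure(1.5)   # firm press
toggle.on_release()       # enters the force touch (inking) layer
assert toggle.layer == FORCE_TOUCH_LAYER
```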
The interaction layer switching discussed above provides an easy and intuitive way to switch between interaction layers using a pressure level that exceeds the pressure level of a normal touch on the touch-sensitive display. It also enables a more efficient user experience, since the user does not have to select the interaction layer from menus.
The top of the view 300 may comprise one or more browser-specific general menu items 302. Under the menu items 302, four tabs 304-310 are illustrated. Each tab may comprise a different currently open web page. The horizontal lines in TAB1 304 indicate that this tab is currently the active tab in the view 300. In this embodiment, the tabs 304-310 are regarded as interaction layers.
Normally, a user would select the desired tab by touching it with a finger or a stylus. However, in this embodiment, an item 312 in the view 300 illustrates that the user firmly presses the touch-sensitive display. The term “firmly” may mean that a predetermined pressure level is exceeded. A normal touch used to control normal operations on the view 300 does not exceed the predetermined pressure level; a firm press applies a higher pressure level. The predetermined pressure level may be configured automatically or, alternatively, it may be user-configurable.
In one embodiment, if the user wanted to select the tab 310, the user is able to make the selection by applying a pressure level that maps to the tab 310 on the touch-sensitive display. When this pressure level is reached and the user releases the pressure applied on the touch-sensitive display, this is interpreted as a selection of the tab 310. A different pressure level may be mapped to each tab 304-310, and the selection of a desired tab is made by applying the correct amount of pressure on the touch-sensitive display.
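One possible realization of this per-tab mapping divides the pressure range above the normal touch level into bands, one per tab, previews the mapped tab while the press is held, and commits it on release. The sketch below is a minimal illustration under those assumptions; the band width and the highlight and activate_tab helpers are hypothetical stand-ins for platform calls.

```python
# Hedged sketch: map pressure bands above the normal touch level to the
# tabs 304-310 and commit the preselected tab on release. Band boundaries
# and the two UI callbacks are illustrative assumptions.

TABS = [304, 306, 308, 310]

def highlight(tab: int) -> None:           # hypothetical UI callback
    print(f"preselect tab {tab}")

def activate_tab(tab: int) -> None:        # hypothetical UI callback
    print(f"switch to tab {tab}")

class TabSelector:
    def __init__(self, touch_level: float = 1.0, band_width: float = 0.5):
        self.touch_level = touch_level     # a normal touch stays below this
        self.band_width = band_width       # pressure range mapped to one tab
        self.preselected: int | None = None

    def on_pressure(self, pressure: float) -> None:
        if pressure <= self.touch_level:
            return                         # normal touch: nothing mapped
        index = min(int((pressure - self.touch_level) // self.band_width),
                    len(TABS) - 1)
        if TABS[index] != self.preselected:
            self.preselected = TABS[index]
            highlight(self.preselected)

    def on_release(self) -> None:
        if self.preselected is not None:   # the release commits the selection
            activate_tab(self.preselected)
            self.preselected = None
```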
In a touch interaction mode, a user is able, for example, to scroll the view 402. An item 406 in the view 402 illustrates that the user firmly presses a touch-sensitive display of the apparatus 400. The term “firmly” may mean that a predetermined pressure level is exceeded. A normal touch used to control normal operations on the view 402 does not exceed the predetermined pressure level; a firm press applies a higher pressure level. The predetermined pressure level may be configured automatically or, alternatively, it may be user-configurable.
The application may be any application that provides various tools for a user, for example, a drawing application or an image processing application. The view 500 provides three tool items 502, 504, 506 that the user is able to select. The small black rectangle in each tool item 502, 504, 506 indicates that the tool item has two or more related sub-tool items.
In a normal situation, the user would select one of the tool items, for example, by touching it for a longer time. In this case, the tool item 506 would be selected. This may expand the tool item 506 to show all the related sub-tool items 508, 510, 512, from which the user is able to select the desired sub-tool item via a touch. However, another possibility for the user to select a desired sub-tool item is to first touch the tool item 506 using a pressure level that exceeds a normally used touch pressure level.
In one embodiment, each sub-tool item 508, 510, 512 has been linked with a different predetermined pressure level. Assume that the user applies a moderate pressure level that is associated with the sub-tool item 508. The view 500 may provide a visual indication that the current pressure level is associated with the sub-tool item 508, for example, by shading the sub-tool item 508. If the user then releases his touch on the touch-sensitive display, this is detected by the apparatus and interpreted as a selection of the sub-tool item 508. If the user applies more pressure on the touch-sensitive display, the sub-tool items 510, 512 may be associated with the increased pressure levels. This means that the user may choose a desired sub-tool item 508, 510, 512 by applying the correct amount of pressure on the touch-sensitive display.
In another embodiment, the user may use a single pressure level exceeding a normally used touch pressure level to select any of the sub-tool items 508, 510, 512. When the user first starts to apply the pressure level exceeding the normally used touch pressure level, the sub-tool item 508 is visually indicated, for example, by shading, as a preselected sub-tool item. When the user maintains the same pressure level for a predetermined period of time, the preselection moves to the next sub-tool item, and this may again be indicated visually to the user. When the user then releases his touch on the touch-sensitive display, the most recently preselected sub-tool item is considered the selected sub-tool item.
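A minimal sketch of this single-pressure-level variant follows. The dwell period and the choice to clamp at the last sub-tool item instead of wrapping around are assumptions the description leaves open.

```python
# Illustrative sketch of the single-threshold, timer-driven preselection:
# a firm press preselects the first sub-tool item, holding advances the
# preselection once per dwell period, and the release commits the most
# recent preselection. The 0.5 s dwell and the clamp are assumptions.

import time

SUB_TOOLS = [508, 510, 512]

class DwellSelector:
    def __init__(self, threshold: float, dwell_s: float = 0.5):
        self.threshold = threshold            # single predetermined level
        self.dwell_s = dwell_s                # predetermined period of time
        self._press_start: float | None = None
        self.preselected: int | None = None

    def on_pressure(self, pressure: float, now: float | None = None) -> None:
        now = time.monotonic() if now is None else now
        if pressure <= self.threshold:
            return
        if self._press_start is None:
            self._press_start = now           # the firm press begins
        held = now - self._press_start
        index = min(int(held // self.dwell_s), len(SUB_TOOLS) - 1)
        if SUB_TOOLS[index] != self.preselected:
            self.preselected = SUB_TOOLS[index]
            print(f"preselect sub-tool {self.preselected}")  # e.g., shading

    def on_release(self) -> None:
        if self.preselected is not None:
            print(f"select sub-tool {self.preselected}")
            self._press_start = None
            self.preselected = None
```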
The view 600 shows, in a cascaded manner, all user applications currently executing in the apparatus. Before the user is given the cascaded application view, the user may have applied, with a finger, a pressure level that exceeds a normally used touch pressure level on the touch-sensitive display. This may be interpreted by the apparatus as a desire to change the currently active application displayed on the touch-sensitive display. The application that was active before the user applied this pressure level may be shown as the first application 602 in the cascaded application view.
When the user increases the pressure level applied on the touch-sensitive display, the order of the applications 602, 604, 606, 608 may change, and the application 604 may become the first application in the cascaded view.
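For illustration, the reordering can be modeled as rotating the cascade once per pressure-level increment, with the release keeping the frontmost application active. The level step and the numeric pressure values in the sketch below are assumptions.

```python
# Illustrative sketch of the cascaded application switcher: each step up
# in pressure level rotates the cascade so the next application comes to
# the front; releasing keeps the frontmost application active.

from collections import deque

class AppSwitcher:
    def __init__(self, apps, level_step: float, touch_level: float):
        self.apps = deque(apps)          # frontmost application first
        self.level_step = level_step     # assumed size of one level step
        self.touch_level = touch_level   # normal touch stays below this
        self._last_level = 0

    def on_pressure(self, pressure: float) -> None:
        if pressure <= self.touch_level:
            return
        level = int((pressure - self.touch_level) // self.level_step)
        while self._last_level < level:  # one rotation per level increment
            self.apps.rotate(-1)         # e.g., 604 becomes the first app
            self._last_level += 1

    def on_release(self):
        self._last_level = 0
        return self.apps[0]              # the application switched to

switcher = AppSwitcher([602, 604, 606, 608], level_step=0.5, touch_level=1.0)
switcher.on_pressure(1.6)                # one step up: 604 moves to the front
assert switcher.on_release() == 604
```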
A pressure level 700 illustrates a normal touch pressure level on the touch-sensitive display. In addition to the normal pressure level 700, four other pressure levels 702, 704, 706, 708 are illustrated in FIG. 7.
As discussed above, time may be used as an additional parameter for determining the interaction layer mapped to the pressure level 712. The time period between the time points (for example, the time period between time points 714 and 716) may be user-configurable. Alternatively, it may be configured automatically by the apparatus to be a default time period. When time is used as an additional parameter in addition to a single predetermined pressure level, the user does not have to vary the pressure level applied on the touch-sensitive display. Instead, it is sufficient that the applied pressure level exceeds the predetermined pressure level, and a timer is used to initiate the switch between the interaction layers.
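The timer variant reduces to simple arithmetic: the hold time, divided by the dwell period, selects the layer index. A minimal worked sketch, with illustrative values:

```python
# With a single predetermined pressure level, only the hold time selects
# the layer. The dwell period may be user-configured or a default; the
# numbers below are illustrative, not taken from the description.

def preselected_layer(hold_time_s: float, dwell_s: float, n_layers: int) -> int:
    """Index of the interaction layer preselected after holding for hold_time_s."""
    return min(int(hold_time_s // dwell_s), n_layers - 1)

# Holding a firm press for 1.2 s with a 0.5 s dwell preselects layer index 2.
assert preselected_layer(1.2, dwell_s=0.5, n_layers=4) == 2
```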
In 800 a pressure level applied on a pressure level sensitive user input device is detected. The pressure level sensitive user input device may refer, for example, to a touch-sensitive display that is able to detect a pressure level applied on the display, or to a touch pad used to control an apparatus.
In 802 the pressure level is mapped to an interaction layer of a set of interaction layers provided by the graphical user interface. The term “interaction layer” may refer to any application, application item or other item on the graphical user interface, or to an interaction mode that can be selected by the user.
In 804 release of the pressure on the pressure level sensitive user input device is detected.
In 806, in response to detecting the release of the pressure on the pressure level sensitive user input device, a switch is made to the interaction layer mapped to the pressure level.
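Steps 800-806 can be drawn together into one event-driven sketch. The pressure-to-layer mapping is injected as a function so the same flow covers the tab, sub-tool and application embodiments discussed above; all identifiers and values are illustrative assumptions.

```python
# End-to-end sketch of the method of steps 800-806. The injected
# map_pressure function decides which interaction layer a given pressure
# maps to; everything here is an illustrative assumption.

from typing import Callable

class InteractionLayerSwitcher:
    def __init__(self, layers: list[str], threshold: float,
                 map_pressure: Callable[[float], int]):
        self.layers = layers
        self.threshold = threshold          # predetermined pressure level
        self.map_pressure = map_pressure    # pressure -> index into layers
        self.active = layers[0]
        self._mapped: str | None = None

    def on_pressure(self, pressure: float) -> None:
        if pressure > self.threshold:           # 800: detect the pressure level
            index = self.map_pressure(pressure)
            self._mapped = self.layers[index]   # 802: map it to a layer

    def on_release(self) -> None:               # 804: detect the release
        if self._mapped is not None:
            self.active = self._mapped          # 806: switch to the mapped layer
            self._mapped = None

# Usage: two layers, any pressure above the threshold maps to layer 1.
switcher = InteractionLayerSwitcher(["touch", "force_touch"], 1.0,
                                    map_pressure=lambda p: 1)
switcher.on_pressure(1.4)
switcher.on_release()
assert switcher.active == "force_touch"
```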
At least some of the embodiments provide one or more of the following effects. A solution is provided that enables more efficient use of a user interface and also leads to an improved user experience. The user is able to easily and intuitively switch between interaction layers, or select an interaction layer, by applying one or more predetermined pressure levels.
According to an aspect, there is provided an apparatus comprising at least one processing unit, at least one memory, a pressure level sensitive user input device, and a graphical user interface. The at least one memory stores program instructions that, when executed by the at least one processing unit, cause the apparatus to detect that a pressure level applied on the pressure level sensitive user input device exceeds a predetermined pressure level, map the pressure level to an interaction layer of a set of interaction layers provided by the graphical user interface, provide an indication to a user, the indication indicating the interaction layer mapped to the pressure level, wherein the indication comprises at least one of a visual indication, a tactile indication and a vocal indication, detect release of the pressure on the pressure level sensitive user input device, and switch to the interaction layer mapped to the pressure level in response to detecting the release of the pressure on the pressure level sensitive user input device.
According to another aspect, there is provided an apparatus comprising at least one processing unit, at least one memory, a pressure level sensitive user input device, and a graphical user interface. The at least one memory stores program instructions that, when executed by the at least one processing unit, cause the apparatus to detect that a pressure level applied on the pressure level sensitive user input device exceeds a predetermined pressure level, map the pressure level to an interaction layer of a set of interaction layers provided by the graphical user interface, detect release of the pressure on the pressure level sensitive user input device, and switch to the interaction layer mapped to the pressure level in response to detecting the release of the pressure on the pressure level sensitive user input device.
In one embodiment, each interaction layer is associated with a different pressure level.
In one embodiment, alternatively or in addition, the at least one memory stores program instructions that, when executed by the at least one processing unit, cause the apparatus to detect an increase in the pressure level applied on the pressure level sensitive user input device, and map the increased pressure level to another interaction layer of the set of interaction layers provided by the graphical user interface.
In one embodiment, alternatively or in addition, a single pressure level is associated with all interaction layers in the set of interaction layers.
In one embodiment, alternatively or in addition, the at least one memory stores program instructions that, when executed by the at least one processing unit, cause the apparatus to start a timer after mapping the pressure level to the interaction layer of the set of interaction layers, detect expiration of the timer, and map the pressure level to a subsequent interaction layer of the set of interaction layers in response to detecting the expiration of the timer.
In one embodiment, alternatively or in addition, the at least one memory stores program instructions that, when executed by the at least one processing unit, cause the apparatus to provide an indication to a user, the indication indicating the interaction layer mapped to the pressure level, wherein the indication comprises at least one of a visual indication, a tactile indication and a vocal indication.
In one embodiment, alternatively or in addition, the set of interaction layers comprises tabs within a single application.
In one embodiment, alternatively or in addition, the set of interaction layers comprises active applications accessible via the graphical user interface.
In one embodiment, alternatively or in addition, the set of interaction layers comprises sub-items of a graphical user interface item.
In one embodiment, alternatively or in addition, the set of interaction layers is application-specific.
In one embodiment, alternatively or in addition, the predetermined pressure level is user-configurable.
In one embodiment, alternatively or in addition, the predetermined pressure level is configured automatically.
According to an aspect, there is provided a method comprising detecting that a pressure level applied on a pressure level sensitive user input device exceeds a predetermined pressure level, mapping the pressure level to an interaction layer of a set of interaction layers provided by a graphical user interface, detecting release of the pressure on the pressure level sensitive user input device, and switching to the interaction layer mapped to the pressure level in response to detecting the release of the pressure on the pressure level sensitive user input device.
In one embodiment, each interaction layer is associated with a different pressure level.
In one embodiment, alternatively or in addition, the method comprises detecting an increase in the pressure level applied on the pressure level sensitive user input device, and mapping the increased pressure level to another interaction layer of the set of interaction layers provided by the graphical user interface.
In one embodiment, alternatively or in addition, a single pressure level is associated with all interaction layers in the set of interaction layers.
In one embodiment, alternatively or in addition, the method comprises starting a timer after mapping the pressure level to the interaction layer of the set of interaction layers, detecting expiration of the timer, and mapping the pressure level to a subsequent interaction layer of the set of interaction layers in response to detecting the expiration of the timer.
In one embodiment, alternatively or in addition, the method comprises providing an indication to a user, the indication indicating the interaction layer mapped to the pressure level, wherein the indication comprises at least one of a visual indication, a tactile indication and a vocal indication.
In one embodiment, alternatively or in addition, the set of interaction layers comprises tabs within a single application.
In one embodiment, alternatively or in addition, the set of interaction layers comprises active applications accessible via the graphical user interface.
In one embodiment, alternatively or in addition, the set of interaction layers comprises sub-items of a graphical user interface item.
In one embodiment, alternatively or in addition, the set of interaction layers is application-specific.
In one embodiment, alternatively or in addition, the predetermined pressure level is user-configurable.
In one embodiment, alternatively or in addition, the predetermined pressure level is configured automatically.
According to another aspect, there is provided a computer program comprising program code, which when executed by at least one processor, causes an apparatus to perform detecting that a pressure level applied on a pressure level sensitive user input device exceeds a predetermined pressure level, mapping the pressure level to an interaction layer of a set of interaction layers provided by a graphical user interface, detecting release of the pressure on the pressure level sensitive user input device, and switching to the interaction layer mapped to the pressure level in response to detecting the release of the pressure on the pressure level sensitive user input device.
In one embodiment, the computer program is embodied on a computer-readable medium.
According to another aspect, there is provided an apparatus comprising a pressure level sensitive user input device and a graphical user interface. The apparatus further comprises means for detecting that a pressure level applied on the pressure level sensitive user input device exceeds a predetermined pressure level, means for mapping the pressure level to an interaction layer of a set of interaction layers provided by the graphical user interface, means for detecting release of the pressure on the pressure level sensitive user input device, and means for switching to the interaction layer mapped to the pressure level in response to detecting the release of the pressure on the pressure level sensitive user input device.
According to another aspect, there is provided an apparatus comprising a pressure level sensitive user input device and a graphical user interface. The apparatus further comprises means for detecting that a pressure level applied on the pressure level sensitive user input device exceeds a predetermined pressure level, means for mapping the pressure level to an interaction layer of a set of interaction layers provided by the graphical user interface, means for providing an indication to a user, the indication indicating the interaction layer mapped to the pressure level, wherein the indication comprises at least one of a visual indication, a tactile indication and a vocal indication, means for detecting release of the pressure on the pressure level sensitive user input device, and means for switching to the interaction layer mapped to the pressure level in response to detecting the release of the pressure on the pressure level sensitive user input device.
Alternatively, or in addition, the functionality described herein can be performed, at least in part, by one or more hardware logic components. For example, and without limitation, illustrative types of hardware logic components that can be used include Field-Programmable Gate Arrays (FPGAs), Application-Specific Integrated Circuits (ASICs), Application-Specific Standard Products (ASSPs), System-on-a-Chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), and Graphics Processing Units (GPUs).
The functions described herein as performed by a controller may be performed by software in machine-readable form on a tangible storage medium, e.g., in the form of a computer program comprising computer program code means adapted to perform all the steps of any of the methods described herein when the program is run on a computer, and where the computer program may be embodied on a computer-readable medium. Examples of tangible storage media include computer storage devices comprising computer-readable media such as disks, thumb drives, and memory, and do not include propagated signals. Propagated signals may be present in a tangible storage medium, but propagated signals per se are not examples of tangible storage media. The software can be suitable for execution on a parallel processor or a serial processor such that the method steps may be carried out in any suitable order, or simultaneously.
Although the subject matter may have been described in language specific to structural features and/or acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as examples of implementing the claims and other equivalent features and acts are intended to be within the scope of the claims.
It will be understood that the benefits and advantages described above may relate to one embodiment or may relate to several embodiments. The embodiments are not limited to those that solve any or all of the stated problems or those that have any or all of the stated benefits and advantages.
Aspects of any of the examples described above may be combined with aspects of any of the other examples described to form further examples without losing the effect sought.
The term ‘comprising’ is used herein to mean including the identified method blocks or elements, but such blocks or elements do not constitute an exclusive list, and a method or apparatus may contain additional blocks or elements.
It will be understood that the above description is given by way of example only and that various modifications may be made by those skilled in the art. The above specification, examples and data provide a complete description of the structure and use of exemplary embodiments. Although various embodiments have been described above with a certain degree of particularity, or with reference to one or more individual embodiments, those skilled in the art could make numerous alterations to the disclosed embodiments without departing from the spirit or scope of this specification. In particular, the individual features, elements, or parts described in the context of one example may be combined, in any combination, with any other example.