1. Technical Field
This invention relates generally to the field of user interfaces. More specifically, this invention relates to improving usability of cross-device user interfaces.
2. Description of the Related Art
Remote desktop and similar technologies allow users to access the interfaces of their computing devices, such as, but not limited to, computers, phones, tablets, televisions, etc. (considered herein as a “server”) from a different device, which can also be a computer, phone, tablet, television, gaming console, etc. (considered herein as a “client”). Such communication between the devices may be referred to herein as “remote access” or “remote control” regardless of the actual distance between devices. With remote access or remote control, such devices can be connected either directly or over a local or wide area network, for example.
Remote access requires the client to pass user interaction events, such as mouse clicks, key strokes, touch, etc., to the server. The server subsequently returns the user interface images or video back to the client, which then displays the returned images or video to the user.
It should be appreciated that the input methods that a client makes available to the user may be different from those assumed by the server. For example, when the client is a touch tablet and the server is a PC with a keyboard and a mouse, the touch-based input methods available on the tablet differ from the keyboard and mouse input the server expects.
Mechanisms are provided that improve the usability of remote access between different devices or with different platforms by predicting user intent and, based in part on the prediction, offering the user appropriate interface tools or modifying the present interface accordingly. Mechanisms for creating and using gesture maps that improve usability between cross-device user interfaces are also provided.
One or more embodiments can be understood with reference to the accompanying figures.
One or more embodiments are discussed hereinbelow in which the need for a keyboard is predicted.
An embodiment for predicting the need for a keyboard can be understood by the following example situation: a user with a touch tablet, such as an iPad® (“client”), remotely accesses a computer (“server”) and uses a web browser on that computer. In this example, when the user taps on the address bar of the image of the computer browser, as displayed on the tablet, the user expects to enter a URL address. This action requires the tablet client to display a software keyboard to take the user's input of the URL address.
However, the client normally is not aware of what the user tapped on, because the actual web browser software is running on the server, not the client.
In one or more embodiments, the user intent may be predicted in several ways and such prediction may be used to bring up the software keyboard on the client. Such ways include but are not limited to the following techniques, used together or separately:
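By way of illustration only, the following is a minimal Python sketch of one possible such technique, under the assumption that the server side can detect when keyboard focus lands in a text-entry control and notify the client. All names, such as on_focus_changed, send_to_client, and the control class names, are hypothetical and not drawn from any specific embodiment described herein.

```python
# Hypothetical sketch: server-side detection of text-entry focus used to
# predict that the client should raise its software keyboard.
# TEXT_INPUT_CLASSES, send_to_client, and the command names are illustrative.

TEXT_INPUT_CLASSES = {"Edit", "RichEdit", "AddressBand", "SearchBox"}

def on_focus_changed(focused_control_class: str, send_to_client) -> None:
    """Called by a hypothetical OS hook whenever keyboard focus moves."""
    if focused_control_class in TEXT_INPUT_CLASSES:
        # Predicted intent: the user is about to type text on the client.
        send_to_client({"cmd": "SHOW_KEYBOARD"})
    else:
        send_to_client({"cmd": "HIDE_KEYBOARD"})
```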
One or more embodiments are discussed hereinbelow in which the need for magnification is predicted.
An embodiment for predicting the need for magnification can be understood by the following example situation. In such example, a user is using a touch phone, such as an iPhone® by Apple Inc. (“client”), that has a small screen and remotely accesses a computer (“server”) that has a large or high-resolution screen. The user commonly needs to close/minimize/maximize windows displayed by the server operating system, such as Windows or Mac OS. However, on the small client screen, those controls may be very small and hard to “hit” with touch.
One or more embodiments provided and discussed herein predict an intention of a user to magnify an image or window as follows. Such embodiments may include but are not limited to the following techniques, used together or separately:
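As an illustration only, the following Python sketch shows one possible such technique, under the assumption that the client knows the scale factor at which the server image is displayed and an approximate size of standard window controls. The names, threshold values, and the viewport object are hypothetical.

```python
# Hypothetical sketch: when the server screen is scaled down so far that
# standard window controls (close/minimize/maximize) would be smaller than a
# comfortable touch target, zoom in around a tap near the window title bar.

MIN_TOUCH_TARGET_PX = 44      # assumed minimum comfortable touch size
CONTROL_SIZE_SERVER_PX = 16   # assumed size of window controls on the server

def maybe_magnify(tap_x, tap_y, scale_factor, near_title_bar, viewport):
    """viewport: hypothetical object exposing zoom_at(x, y, factor)."""
    on_client_size = CONTROL_SIZE_SERVER_PX * scale_factor
    if near_title_bar and on_client_size < MIN_TOUCH_TARGET_PX:
        # Predicted intent: the user is trying to hit a tiny window control.
        viewport.zoom_at(tap_x, tap_y, MIN_TOUCH_TARGET_PX / on_client_size)
        return True
    return False
```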
One or more embodiments are discussed hereinbelow in which the need for active scrolling is predicted.
An embodiment for predicting the need for active scrolling can be understood by the following example situation: a user with a touch phone, such as an iPhone® (“client”), that has a small screen remotely accesses a computer (“server”) that has a large or high-resolution screen and uses a web browser or an office application on that computer. As is common, a user may need to scroll within the window of such application. However, on the small client screen, the scroll bar and scrolling controls may be small and hard to hit or grab with touch.
One or more embodiments provide improvements, such as but not limited to the following techniques, which can be used together or separately:
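For purposes of illustration, one possible such technique is to translate a two-finger drag gesture on the client into mouse-wheel events sent to the server, so the user never needs to grab the tiny scroll bar. The following Python sketch is hypothetical; send_to_server and the command format are assumptions.

```python
# Hypothetical sketch: convert a two-finger vertical drag on the client into
# mouse-wheel events forwarded to the server.

WHEEL_STEP_PX = 40   # assumed drag distance corresponding to one wheel "click"

def on_two_finger_drag(delta_y_px: float, send_to_server) -> None:
    steps = int(delta_y_px / WHEEL_STEP_PX)
    if steps != 0:
        # Positive steps scroll up, negative steps scroll down, like a wheel.
        send_to_server({"cmd": "MOUSE_WHEEL", "steps": steps})
```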
One or more embodiments are discussed hereinbelow in which the need for passive scrolling is predicted.
An embodiment for predicting the need for passive scrolling can be understood by the following example situation: a user with a touch tablet, such as an iPad® (“client”), remotely accesses a computer (“server”) and uses an office application, such as Microsoft® Word®, to type. Presently, when the location of the input happens to be in the lower half of the screen, that area of the client screen may be occupied by the software keyboard that the client overlays on the image of the server screen, so the text being typed may be hidden from the user's view.
One or more embodiments herein provide improvements that may include the following techniques, which may be used together or separately:
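As an illustration only, one possible such technique is for the client to pan the displayed server image when the reported text-caret position falls under the on-screen keyboard, so the typed text stays visible. The Python sketch below is hypothetical; the viewport object and parameter names are assumptions.

```python
# Hypothetical sketch: if the caret position (in client pixels) would be
# covered by the on-screen keyboard, pan the server image upward so the text
# being typed remains visible.

def keep_caret_visible(caret_y_client_px, keyboard_top_px, viewport, margin=24):
    """viewport: hypothetical object exposing pan_by(dx, dy)."""
    if caret_y_client_px > keyboard_top_px - margin:
        # Predicted intent: the user is typing into a region the keyboard hides.
        viewport.pan_by(0, keyboard_top_px - margin - caret_y_client_px)
```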
One or more embodiments are discussed hereinbelow in which the need for window resizing is predicted.
An embodiment for predicting the need for window resizing can be understood by the following example situation: a user with a touch phone, such as an iPhone® (“client”), that has a small screen remotely accesses a computer (“server”) that has a large or high-resolution screen and uses a variety of window-based applications. As is common, the user may need to resize application windows. However, on the small client screen with touch, grabbing the edges or corners of the windows may be difficult.
Thus, one or more embodiments provide improvements to the above-described situation (but not limited to such situation), which may include the following techniques, used together or separately:
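By way of illustration only, one possible such technique is to enlarge the effective hit area around window edges and corners so that an imprecise touch still begins a resize drag. The Python sketch below is hypothetical; the margin value and parameter names are assumptions.

```python
# Hypothetical sketch: treat a touch within an enlarged margin around a
# window's borders as a grab of that border for resizing.

EDGE_GRAB_MARGIN_PX = 24   # assumed enlarged grab zone around window borders

def near_window_edge(touch_x, touch_y, win_left, win_top, win_right, win_bottom):
    """Return True when a touch is close enough to a window border to resize."""
    near_x = (abs(touch_x - win_left) < EDGE_GRAB_MARGIN_PX or
              abs(touch_x - win_right) < EDGE_GRAB_MARGIN_PX)
    near_y = (abs(touch_y - win_top) < EDGE_GRAB_MARGIN_PX or
              abs(touch_y - win_bottom) < EDGE_GRAB_MARGIN_PX)
    inside_x = win_left - EDGE_GRAB_MARGIN_PX <= touch_x <= win_right + EDGE_GRAB_MARGIN_PX
    inside_y = win_top - EDGE_GRAB_MARGIN_PX <= touch_y <= win_bottom + EDGE_GRAB_MARGIN_PX
    return (near_x and inside_y) or (near_y and inside_x)
```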
One or more embodiments are discussed hereinbelow in which application-specific control needs are predicted.
An embodiment for predicting application-specific control needs can be understood by the following example situation, which has similarities to Method C1, predicting the need for active scrolling, described hereinabove: there are application-specific controls, such as the browser back and forward navigation buttons, the usability of which may be improved by one or several of the following:
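As an illustration only, one possible approach is for the client to show overlay buttons for application-specific controls when a matching application is in the foreground on the server. The Python sketch below is hypothetical; the application names and keyboard shortcuts are assumptions, not a mapping required by any embodiment.

```python
# Hypothetical sketch: offer back/forward overlay buttons only when the
# server's foreground application is a web browser.

BROWSER_APPS = {"chrome.exe", "firefox.exe", "iexplore.exe"}

def overlay_controls_for(foreground_app: str):
    if foreground_app.lower() in BROWSER_APPS:
        return [
            {"label": "Back",    "keys": ["Alt", "Left"]},
            {"label": "Forward", "keys": ["Alt", "Right"]},
        ]
    return []   # no application-specific overlay for other applications
```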
One or more embodiments are discussed hereinbelow for modifying input behavior based on client peripherals.
In an embodiment, it is considered that the usability improvements described above may or may not be necessary depending on the peripheral devices attached to the client. Such peripherals may include, but are not limited to:
In an embodiment, the logic used to enable or disable the usability improvement methods described above incorporates conditional execution depending on the peripherals in use.
Thus, for example, if an external keyboard is attached to the client device, the user may not need to use the on-screen keyboard displayed on the client, but may instead use the external keyboard for typing text. In this example, and for purposes of understanding, conditional execution depending on the peripheral in use, i.e. the keyboard, may work as follows: tapping on the “enable keyboard input” icon results in displaying an on-screen keyboard on the client device screen when there is no external keyboard peripheral connected to the client device, whereas when there is a Bluetooth or other keyboard peripheral paired or connected to the client device, tapping on the “enable keyboard input” icon does not display an on-screen keyboard, but rather simply enables the peripheral keyboard's input to be sent in the control stream from the client to the Streamer.
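The following Python sketch restates this conditional logic for purposes of illustration only; the client object and its method names are hypothetical.

```python
# Hypothetical sketch of the conditional execution described above: the
# "enable keyboard input" icon behaves differently depending on whether an
# external keyboard peripheral is connected to the client.

def on_enable_keyboard_tapped(external_keyboard_connected: bool, client) -> None:
    if external_keyboard_connected:
        # A Bluetooth or other keyboard is paired: just forward its keystrokes
        # in the control stream from the client to the Streamer.
        client.route_keyboard_input_to_streamer(True)
    else:
        # No peripheral keyboard: fall back to the on-screen keyboard.
        client.show_onscreen_keyboard()
```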
One or more embodiments are discussed for offering customizable controls via a developer Application Programming Interface (API) or a Software Development Kit (SDK).
One embodiment for improving usability of application interfaces delivered over a remote desktop connection allows third party developers to add their own custom user interface control elements, such as virtual buttons or virtual joysticks, by using one or more APIs or an SDK made available to such third party developers by the remote desktop vendor. Thus, by programming against such one or more APIs or SDK, developers may add pre-defined or custom controls, such as, but not limited to, screen overlays, or may integrate such controls within the application. Thus, an end user may have immediate access to such virtual controls, such as a button or joystick.
As well, by programming such one or more APIs or SDK, the remote desktop client may be notified that a particular type of controller is needed depending on the logic of the third party application. For example, an end user may bring up or invoke a virtual keyboard where some of the keys, e.g. F1-F12, are dedicated keys that are programmed by the third party to perform particular operations for the particular application, such as a virtual game.
In an embodiment, one or more APIs may be simple enough that typical end users may be able to customize some aspects of the controls. For instance, in a first-person Windows game, there may be a visual UI joystick overlay on the client device screen which controls the game player's directional and jumping movements. Not all Windows games however use the traditional ‘arrow keys’ or standard (A, W, S, D) keys to control directional movement. If the game has a different keyboard mapping or the user has customized their game environment to use different keys to control directional movement, a simple mapping/configuration tool or mode may be included in the Client application to map which keyboard stroke to send when each of the joystick's directional coordinates are invoked.
For example, consider a Windows game developer who is programming a game that uses four keyboard keys to navigate (A, W, S, D) and space bar to shoot. Because such game may be redirected to a remote tablet/phone with a touch interface, such developer may need to create a screen overlay having A, W, S, D and space-bar keys on the touch device. Such overlay may optimize the experience for the user, such as mapping virtual joysticks to these key interfaces. For instance up may be “W”, down “S”, left “A”, and right “D”.
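For purposes of illustration, the following Python sketch captures the joystick-to-key mapping just described; the function and variable names are hypothetical, and the mapping can be swapped for a user-customized one as discussed above.

```python
# Hypothetical sketch: translate a virtual joystick direction on the touch
# client into the keyboard key the game expects (up->W, down->S, left->A,
# right->D in this example), so the mapping can be reconfigured per game.

DEFAULT_JOYSTICK_MAP = {"up": "W", "down": "S", "left": "A", "right": "D"}

def joystick_to_keypress(direction: str, mapping=None) -> str:
    mapping = mapping or DEFAULT_JOYSTICK_MAP
    return mapping[direction]   # e.g. "up" -> "W", sent to the server as a key event
```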
For example, Splashtop Streamer by Splashtop Inc. may have an API/SDK to allow the game developer to add such a request-to-Splashtop for such screen overlay in accordance with an embodiment. In this example, the Splashtop Streamer notifies the Splashtop Remote client to overlay such keys at the client side.
In an embodiment, such overlay optionally may be done at the Streamer device side. However, it may be that a better user experience can be achieved when the overlay is done at the client side and, thus, both options are provided.
Thus, by such provided APIs at the Streamer or server side, application developers are empowered to render their applications as “remote” or “touch-enabled” when such applications would otherwise have been targeted for traditional keyboard/mouse PC/MAC user environments.
It should be appreciated that one or more of the APIs/SDK may be readily found on the Internet or other networked sources, such as but not limited to the following sites:
One or more embodiments for gesture mapping for improving usability of cross-device user interfaces are described in further detail hereinbelow.
One or more embodiments may be understood with reference to FIG. 1, in which a client device 102 and a Streamer device 104 communicate over a network 106 in a remote session 100.
An important factor to take into consideration is that the native input mechanism of client device 102 often may not directly correlate to a native action on remote Streamer device 104 and vice-versa. Therefore one or more embodiments herein provide a gesture mapping mechanism between client device 102 and Streamer device 104.
For example, when client device 102 is a touch-screen client tablet that does not have a mouse input device attached, in accordance with an embodiment, a set of defined gestures for tablet 102 may be mapped to a set of mouse movements, clicks, and scrolls on a traditional mouse+keyboard computer, when server 104 is such computer.
Another example is when client 102 is an iPad® client device that uses a 2-finger pinch gesture, which is native to client device 102, to zoom in and out of a view. In an embodiment, when server 104 is an Android Streamer device, such gesture may be mapped to that device's native 1-finger double-tap gesture to perform the corresponding zoom in and out action.
In addition, an embodiment may be configured such that client gestures that may be handled as native actions on the client device are handled on the client device, instead of being passed through a cmd_channel to the Streamer device to perform an action.
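As an illustration only, the following Python sketch combines the mapping examples above with the local-versus-remote handling just described: each client gesture is associated either with a local action or with an action forwarded over the cmd_channel. The gesture names, action names, and objects are hypothetical assumptions.

```python
# Hypothetical sketch of a cross-device gesture map: some gestures are handled
# locally on the client, others are forwarded to the Streamer for execution.

GESTURE_MAP = {
    "pinch_in":         {"where": "streamer", "action": "zoom_out"},
    "pinch_out":        {"where": "streamer", "action": "zoom_in"},
    "two_finger_drag":  {"where": "streamer", "action": "scroll"},
    "three_finger_tap": {"where": "client",   "action": "toggle_toolbar"},
}

def dispatch(gesture: str, client, cmd_channel) -> None:
    entry = GESTURE_MAP.get(gesture)
    if entry is None:
        return
    if entry["where"] == "client":
        client.perform(entry["action"])                 # handled locally
    else:
        cmd_channel.send({"action": entry["action"]})   # forwarded to Streamer
```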
On multi-touch client device 102, an embodiment provides gesture detection mechanisms for both pre-defined gestures as well as the ability to detect custom-defined gestures. For both of these cases, once a gesture is detected, client device 102 hosts one or more gesture handler functions which may directly send actions/events to Streamer 104 via the cmd_channel across network 106, perform a local action on client device 102, perform additional checks to see whether remote session 100 is in a certain specified state before deciding whether to perform/send an action, etc. For example, a custom gesture that is detected on the Client device may have a different mapping on the Streamer device depending on which application is in the foreground. Take the earlier example of 4-finger swipe to the right mapping to ‘go forward’ or ‘next page’. When the Streamer device receives this generic command, it should check whether the active foreground application is a web browser application, and execute the ‘go forward’ keyboard combination (Ctrl+RightArrow). On the other hand, when the Streamer device knows that the active foreground application is a document viewer, the Streamer device may instead execute the ‘next page’ keyboard command (PageDown).
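For purposes of illustration, the following Python sketch shows Streamer-side dispatch of the generic command from the 4-finger-swipe example above, using the key combinations named in that example. The application names and the get_foreground_app and send_keys helpers are hypothetical assumptions.

```python
# Hypothetical sketch: the same generic 'forward' command is translated
# differently depending on which application is in the foreground on the
# Streamer device, as in the example above.

def handle_navigate_forward(get_foreground_app, send_keys) -> None:
    app = get_foreground_app()
    if app in ("chrome.exe", "firefox.exe"):
        send_keys("Ctrl", "Right")   # web browser: 'go forward'
    elif app in ("AcroRd32.exe", "winword.exe"):
        send_keys("PageDown")        # document viewer: 'next page'
    # otherwise: no mapping defined for this foreground application
```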
In an embodiment, once client device 102 decides to send an action to Streamer device 104 through the cmd_channel over network 106, a defined packet or packets are sent over network 106 to Streamer 104, which translates the action message into low-level commands that are native to Streamer 104. For example, Streamer 104 may be a traditional Windows computer and, thus, may receive a show taskbar packet and translate such packet into keyboard send-input events for “Win+T” key-presses. As another example, when Streamer 104 is a traditional Mac computer, Streamer 104 may likewise translate the same show taskbar packet into keyboard send-input events for “Cmd+Opt+D”.
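The Python sketch below restates this per-platform translation for illustration only, using the key combinations given in the example above; the send_keys helper stands in for whatever platform send-input API a Streamer implementation would actually use.

```python
# Hypothetical sketch: the Streamer maps an incoming high-level packet, such
# as "show_taskbar", to key events native to its own platform.

import platform

PACKET_TO_KEYS = {
    "show_taskbar": {
        "Windows": ["Win", "T"],          # per the Windows example above
        "Darwin":  ["Cmd", "Opt", "D"],   # per the Mac example above
    },
}

def handle_packet(packet_name: str, send_keys) -> None:
    keys_by_os = PACKET_TO_KEYS.get(packet_name, {})
    keys = keys_by_os.get(platform.system())
    if keys:
        send_keys(*keys)
```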
An embodiment can be understood with reference to TABLE A, a sample gesture map between a multi-touch tablet client device and a remote PC Streamer device.
As well, an embodiment can be understood with reference to the accompanying figures.
In an embodiment, a plurality of mapping profiles is provided. For example, in an embodiment, a user may configure particular settings and preferences for one or more different mapping profiles that the user may have defined. Thus, depending upon such settings in the user's preferences, the user may be able to select from one of the user's gesture mapping profiles that is defined to allow the user to choose between a) remotely accessing and controlling a Streamer device using the client device's native gesture set and input methods or b) using a client device to access and control a Streamer device using the Streamer device's native gesture set and input methods. For example, when client device 102 is an Apple touch-screen phone and Streamer device 104 is an Android touch-screen tablet, the user may select between controlling the remote session by using the Apple iPhone's native gesture set or by using the Android tablet's native gesture set.
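As an illustration only, the Python sketch below shows how such selectable profiles might be represented, with the selected profile determining how an incoming gesture is interpreted. The profile names, gesture names, and actions are hypothetical assumptions.

```python
# Hypothetical sketch: the selected mapping profile decides whether gestures
# are interpreted using the client's native gesture set or the Streamer's.

PROFILES = {
    # Interpret input using the client device's own native gesture set.
    "client_native":   {"pinch_out": "zoom_in", "pinch_in": "zoom_out"},
    # Interpret input the way the Streamer device's platform would natively.
    "streamer_native": {"double_tap": "toggle_zoom"},
}

def action_for(gesture: str, selected_profile: str):
    return PROFILES[selected_profile].get(gesture)
```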
Another beneficial use case for multiple selectable mapping profiles is application context-specific mapping. For instance, a user may want to use one gesture mapping profile when controlling a PowerPoint presentation running on Streamer device 104 from touch-enabled tablet client device 102. In an embodiment, gesture mappings to control the navigation of the slide deck, toggle blanking of the screen to focus attention between the projector display and the speaker, enable any illustrations in the presentation, etc., are provided. An embodiment can be understood with reference to the accompanying figures.
As another example, when the application being run is a first-person shooter game, then a new and different set of gestures may be desired. For example, for such game, a user may flick from the center of the screen on client 102 in a specified direction and Streamer 104 may be sent the packet(s)/action(s) to move the player in such direction. As a further example, a two-finger drag on the screen of client 102 may be mapped to rotating the player's head/view without affecting the direction of player movement.
Advanced Gesture Mapping in Conjunction with UI Overlays
In an embodiment, multiple selectable mapping profiles may also be used in conjunction with user interface (UI) overlays on client device 102 to provide additional advanced functionality to client-Streamer remote session 100. For example, for a game such as Tetris, an arrow pad may be overlaid on the remote desktop screen image from the Streamer to avoid needing to bring up the on-screen keyboard on client device 102, which may obstruct half of the desktop view.
In an example of a more complex first-person shooter game, an embodiment provides a more advanced UI overlay defined for enabling player movement, fighting, and game interaction. Thus, for example, when discrete or continuous gestures are detected on client multi-touch device 102, such gestures may be checked for touch coordinates constrained within the perimeters of the client UI overlay “buttons” or “joysticks” to determine which actions to send across the cmd_channel to Streamer device 104. For instance, there may be an arrow pad UI overlay on the client display which can be moved onto different areas of the screen, so as not to obstruct important areas of the remote desktop image. The client application may detect when a single-finger tap is executed on the client device, then check whether the touch coordinate is contained within part of the arrow pad overlay, and then send the corresponding arrow keypress to the Streamer. When the touch coordinate is not contained within the arrow pad overlay, the client application may send a mouse click to the relevant coordinate of the remote desktop image instead. Because first-person shooter games, MMORPGs, etc. typically have unique control mechanisms and player interfaces, the client UI overlays and associated gesture mapping profiles may be defined specifically for each game according to an embodiment. For example, to achieve a more customizable user environment, an embodiment may provide a programming kit to the user for the user to define his or her own UI element layout and associated gesture mapping to Streamer action.
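The Python sketch below restates this hit test for illustration only; the arrow_pad object, the to_remote_coords helper, and the command format are hypothetical assumptions.

```python
# Hypothetical sketch: a single-finger tap inside the movable arrow-pad
# overlay sends the corresponding arrow keypress; a tap outside it sends a
# mouse click at the matching remote-desktop coordinate.

def on_single_tap(x, y, arrow_pad, to_remote_coords, cmd_channel) -> None:
    """arrow_pad: hypothetical overlay exposing contains(x, y) and key_at(x, y)."""
    if arrow_pad.contains(x, y):
        cmd_channel.send({"cmd": "KEY_PRESS", "key": arrow_pad.key_at(x, y)})
    else:
        rx, ry = to_remote_coords(x, y)
        cmd_channel.send({"cmd": "MOUSE_CLICK", "x": rx, "y": ry})
```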
An embodiment can be understood with reference to FIG. 16, which shows a computer system 1600 on which one or more embodiments may be implemented.
The computer system 1600 includes a processor 1602, a main memory 1604 and a static memory 1606, which communicate with each other via a bus 1608. The computer system 1600 may further include a display unit 1610, for example, a liquid crystal display (LCD) or a cathode ray tube (CRT). The computer system 1600 also includes an alphanumeric input device 1612, for example, a keyboard; a cursor control device 1614, for example, a mouse; a disk drive unit 1616, a signal generation device 1618, for example, a speaker, and a network interface device 1620.
The disk drive unit 1616 includes a machine-readable medium 1624 on which is stored a set of executable instructions, i.e. software, 1626 embodying any one, or all, of the methodologies described herein below. The software 1626 is also shown to reside, completely or at least partially, within the main memory 1604 and/or within the processor 1602. The software 1626 may further be transmitted or received over a network 1628, 1630 by means of a network interface device 1620.
In contrast to the system 1600 discussed above, a different embodiment uses logic circuitry instead of computer-executed instructions to implement processing entities. Depending upon the particular requirements of the application in the areas of speed, expense, tooling costs, and the like, this logic may be implemented by constructing an application-specific integrated circuit (ASIC) having thousands of tiny integrated transistors. Such an ASIC may be implemented with CMOS (complementary metal oxide semiconductor), TTL (transistor-transistor logic), VLSI (very-large-scale integration), or another suitable construction. Other alternatives include a digital signal processing chip (DSP), discrete circuitry (such as resistors, capacitors, diodes, inductors, and transistors), field programmable gate array (FPGA), programmable logic array (PLA), programmable logic device (PLD), and the like.
It is to be understood that embodiments may be used as or to support software programs or software modules executed upon some form of processing core (such as the CPU of a computer) or otherwise implemented or realized upon or within a system or computer readable medium. A machine-readable medium includes any mechanism for storing or transmitting information in a form readable by a machine, e.g. a computer. For example, a machine readable medium includes read-only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; flash memory devices; electrical, optical, acoustical or other form of propagated signals, for example, carrier waves, infrared signals, digital signals, etc.; or any other type of media suitable for storing or transmitting information.
Although the invention is described herein with reference to the preferred embodiment, one skilled in the art will readily appreciate that other applications may be substituted for those set forth herein without departing from the spirit and scope of the present invention. Accordingly, the invention should only be limited by the Claims included below.
This patent application claims priority from U.S. provisional patent application Ser. No. 61/476,669, Splashtop Applications, filed Apr. 18, 2011, the entirety of which is incorporated herein by this reference thereto.