This invention relates to a method for operating a computing device and in particular to a method which enables a computing device to run generic software which makes use of input mechanisms and contains menus and user dialogs which are specific to that device.
The term computing device as used herein is to be expansively construed to cover any form of electrical computing device and includes data recording devices, computers of any type or form, including hand-held and personal computers, and communication devices of any form factor, including mobile phones, smart phones, communicators which combine communications, image recording and/or playback, and computing functionality within a single device, and other forms of wireless and wired information devices.
Many computing devices include input mechanisms by which a user can interact with the software controlling the device, either to give commands or to input data.
The most traditional of these input devices is the keyboard, which can be characterised by its attribute of having specific keys which are permanently mapped to specific items of data. Keyboards have for many years been extended by the addition of either programmable or dedicated function keys which enable otherwise complex commands or sequences of characters to be executed with a single keypress.
A new form of input via pointing devices rather than keys, popularised by the invention of the mouse in the late 1960s, uses areas of the screen known as controls, which display icons, dialogs or menus with one or more parts that can be clicked on through the use of an on-screen cursor in order to issue commands to the device. This type of screen-based user input can also be used with other types of pointing device in addition to the mouse, such as touch-screens, light pens and joysticks.
User interface software for any computing device includes menus, icons and dialogs that must display properly on the screen. In particular, different models of small hand-held devices (such as mobile phones) require different menu structures, text prompts and dialogs. Usually, these are tailored to the input methods available and to the characteristics of the display; when input is required, the menus need to be appropriate for what is available to a user. A user without a mouse should not be presented with a dialog requiring a click, and when key presses are required, a user must not be asked to press a key that is not available on that particular device.
There is now considerable diversity among devices, and coping with this diversity presents a clear problem for user interface (UI) designers, who have no way of knowing in advance what the parameters of a device might be.
There are a number of possible options which may be used to address this problem:
However, none of the above solutions is ideal, either for software producers or for users, principally for the following reasons.
The difficulties identified here are not particularly significant for desktop PCs at the moment, because input methods and screens are standardised and have not changed much for approximately fifteen years. But this is not the case for all computing devices.
In particular, the input characteristics of computing devices in the form of mobile communication devices such as cellular telephones differ from manufacturer to manufacturer, and from model to model from the same manufacturer. A few devices, such as the Blackberry from Research in Motion Ltd, have alphanumeric keypads; some have no keypad at all, but simply a touch screen (such as the Nokia 7700, or the Sony Ericsson P800 and P900 when operated with the flip open). Most mobile phones have a numeric keypad as standard, but even then there are a number of extra keys or buttons in addition to the keypad which differ from manufacturer to manufacturer, and the convenience of their placement is highly variable. There are mobile phones with touchscreens and phones without. Some phones have jog-wheels, some have joysticks, which can be either four-way or eight-way; phones can have both jog-wheels and joysticks. Moreover, a screen control designed to be optimal for a black and white display will probably not be optimal for a colour display. The screen size and resolution, and the pixel size, also differ widely between devices. All these factors affect the way the user interface should be designed.
The most relevant known proposal for addressing this problem is the virtual machine implemented by the Sun Microsystems Inc. Java MIDP (Mobile Information Device Profile) menu system. It is noteworthy that the problem described above is specifically addressed in documents such as http://java.sun.com/j2me/docs/alt-html/midp-style-guide7/midp-char.html which states:
The Java MIDP menu system attempts to solve the problem with its high-level LCDUI (Liquid Crystal Display User Interface) API. This absolves applications from providing their own screen controls, and is also able to map some application commands to device-specific keys. However, the Java virtual machine is essentially an abstraction of an ideal or generic hardware environment and all applications must fit with this virtual machine. For example, the machine only supports a simple menu system with a single pane, which is an inflexible restriction that makes it virtually impossible to create flexible applications with views, dialogs and pop-ups, which need multiple panes.
Such virtual machines are in some respects similar to the solution of providing separate versions of the software for different device families, but without any of the disadvantages, as the burden of providing a separate version for each device is shifted from the software provider to the provider of the virtual machine and is only ever taken once. However, they have the same disadvantage as the solution of designing to the lowest common denominator: the characteristics of the virtual machine become another lowest common denominator. Because hardware differences are abstracted away, it is not possible for unique hardware features to be used to any advantage.
Thus, there has to date been no satisfactory method of building devices so as to enable a generic software application to provide the best user experience for each device on which it runs.
Therefore, it is an object of the present invention to provide an improved way of adapting generic software in order to maximise the facilities available on a multiplicity of device families.
According to a first aspect of the present invention there is provided a method of operating a computing device including one or more generic applications which have not been specifically written for the device, wherein the user interface for the said one or more generic applications supports one or more views, panes or windows requiring separate input, and the one or more generic applications have no knowledge of the input methods available on the device which they should use for accepting commands; the method comprising providing a software entity which does have knowledge of the input methods of the device, the said software entity providing an application program interface (API) for the said one or more generic applications which enables them to utilise input methods of which they have no knowledge.
According to a second aspect of the present invention there is provided a computing device arranged to operate in accordance with the method of the first aspect.
According to a third aspect of the present invention there is provided computer software for causing a computing device to operate in accordance with the method of the first aspect.
An embodiment of the present invention will now be described, by way of further example only, with reference to the accompanying drawings in which:—
In essence, the present invention provides a solution to the problem outlined above by enabling the distribution of application commands to input facilities (such as menu-bar/menu-pane/software or hardware buttons) to be abstracted from an application and handed over to a software entity which is provided by the hardware manufacturer and bound to the device. In the context of the present invention this software entity is referred to as a Command Processing Framework (CPF).
The key difference between the way that a CPF handles input and the way a virtual machine (VM) handles input is that a VM is designed to conceal hardware differences, while a CPF is designed to enable applications to make use of them. Thus, the methodologies of the two mechanisms are very different and in stark contrast to each other.
The function of the CPF is described below as it is implemented in the UIQ™ user interface platform from UIQ Technology AB, which is designed to run on the Symbian OS™ operating system, the advanced mobile phone operating system from Symbian Software Ltd. Those skilled in the art of Symbian OS programming using the UIQ interface platform will readily understand this short description; a full tutorial on the programming metaphors used in this operating system is readily available in standard textbooks such as “Symbian OS C++ for Mobile Phones” by Richard Harrison (ISBN 0470856114). Therefore, these metaphors will not be described specifically in this specification. The Command Processing Framework is implemented by means of a singleton CPF Manager class (CQcpfManager), which manages all commands in a single application. This is instantiated at application startup. The header file containing the class definition is shown in the specific code examples set out below.
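As a rough illustration of the singleton pattern described here, the following standard C++ sketch models a manager that holds a per-control command list. The class name `CpfManager`, the method names, and the `std::` containers are stand-ins chosen for this sketch; they are not the actual Symbian/UIQ declarations from the CQcpfManager header.

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Illustrative sketch only: models a singleton CPF Manager that manages
// all commands in a single application, keyed by the control that
// registered them. Not the real CQcpfManager class.
class CpfManager {
public:
    // Singleton accessor: one manager per application, created on first use.
    static CpfManager& Instance() {
        static CpfManager instance;
        return instance;
    }

    // A control registers its own command list
    // (cf. InitializeResourcesForL in the text).
    void InitializeResourcesFor(const std::string& controlId) {
        commandLists_[controlId];  // creates an empty list if absent
    }

    // Once registered, a control (or its components) can add commands.
    void AddCommand(const std::string& controlId, const std::string& command) {
        commandLists_[controlId].push_back(command);
    }

    const std::vector<std::string>& CommandsFor(const std::string& controlId) {
        return commandLists_[controlId];
    }

private:
    CpfManager() = default;
    std::map<std::string, std::vector<std::string>> commandLists_;
};
```

Making the manager a singleton reflects the text's statement that a single instance manages all commands in one application and is instantiated at startup.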
Any CCoeControl that wants its own set of commands must call InitializeResourcesForL. UIQ programs utilising the standard CQikViewBase may achieve this in their ConstructL. When a CommandModelList has been created, the CCoeControl or any of its component controls can add commands to the CPF Manager.
It should be noted that a component control that does not want the parent control's commands to be available when it has focus (i.e. it is the currently active control to which all input is routed) should be on the application user interface control stack and should, therefore, create its own CommandModelList.
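The fallback behaviour implied here can be sketched as a walk up the control tree: a focused component control that has not created its own CommandModelList inherits its parent's commands, while one that has created its own list shadows the parent's. The tree structure and names below are illustrative assumptions, not UIQ code.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Illustrative control node: a control may or may not own its own
// command list (cf. CommandModelList in the text).
struct ControlNode {
    std::string name;
    ControlNode* parent;
    bool hasOwnCommandList;
    std::vector<std::string> commands;
};

// Walk up from the focused control to the nearest control (self or
// ancestor) that owns a command list, and return that list.
const std::vector<std::string>& CommandsForFocused(const ControlNode& focused) {
    const ControlNode* node = &focused;
    while (node->parent != nullptr && !node->hasOwnCommandList) {
        node = node->parent;
    }
    return node->commands;
}
```

A component control that creates its own list thereby prevents the parent's commands from being offered while it has focus, matching the rule stated above.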
CPF Managers do not own any controls, but are able to adopt the standard ones, such as softkey controls and menu controls. Any CCoeControl that wishes to do so can implement the MCpfControlOwner interface and supply a pointer through which the CPF Manager can retrieve any additional CPF controls it wants to use.
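The ownership arrangement described here is a standard interface pattern, sketched below in plain C++: the control implements an interface through which the manager can retrieve extra CPF controls, but the manager never takes ownership of them. All names are hypothetical stand-ins for the Symbian/UIQ originals.

```cpp
#include <cassert>
#include <string>
#include <vector>

// A CPF control the manager may adopt, e.g. a softkey or menu control.
struct CpfControl {
    std::string name;
};

// Sketch of the MCpfControlOwner idea: an interface the owning control
// implements so the manager can discover additional CPF controls.
class ControlOwnerSketch {
public:
    virtual ~ControlOwnerSketch() = default;
    virtual std::vector<CpfControl*> CpfControls() = 0;
};

// A control that hosts a softkey control and exposes it to the manager.
class SoftkeyHostControl : public ControlOwnerSketch {
public:
    SoftkeyHostControl() : softkeys_{"LeftSoftkey"} {}
    std::vector<CpfControl*> CpfControls() override { return { &softkeys_ }; }
private:
    CpfControl softkeys_;  // remains owned by this control, not the manager
};

// Manager side: adopt (borrow pointers to) whatever the owner exposes.
inline std::vector<CpfControl*> AdoptControls(ControlOwnerSketch& owner) {
    return owner.CpfControls();
}
```

Returning raw pointers here mirrors the text's point that the manager adopts, but never owns, the controls.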
With the CPF, the application developer describes the commands in the resource file of an application in the standard way that would apply for all system programs. However, in the application view's ConstructL, the view is set up such that the resource definition is handed over to the interface of the CPF Manager. Thus this is the only interface that the application developer needs to work with, which provides the added benefit of making it possible for a device manufacturer to replace interaction controls without being concerned about binary compatibility.
When a manufacturer creates a device the controls for user interaction, such as softkey controls and menu controls, are defined and configured. When the CPF Manager detects a focus change, the top focused control (which may be a view) is determined, and the corresponding list of commands is retrieved and handed over to the currently active CPF control (which may be a softkey control or a menu bar control). Because there may be more than one control active at a time, controls are prioritised according to an order protocol as determined by the CPF manager. A typical protocol may, for example, determine that high priority controls are given the opportunity to consume commands before these are offered to lower priority controls.
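The priority protocol described in this paragraph can be sketched as follows: each command from the focused control's list is offered to the active CPF controls in descending priority order, and the first control that consumes it stops the offer from reaching lower-priority controls. The data types and the particular ordering rule are illustrative assumptions, not the actual UIQ protocol.

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

// An active CPF control (e.g. a softkey control or a menu bar control)
// with a priority and the set of commands it is willing to consume.
struct ActiveCpfControl {
    std::string name;
    int priority;                      // higher value = offered first
    std::vector<std::string> accepts;  // commands this control will consume
};

// For each command, report which control consumed it ("<unhandled>" if
// no active control accepted it).
std::vector<std::string> DistributeCommands(
        std::vector<ActiveCpfControl> controls,
        const std::vector<std::string>& commands) {
    // High-priority controls get the first opportunity to consume.
    std::sort(controls.begin(), controls.end(),
              [](const ActiveCpfControl& a, const ActiveCpfControl& b) {
                  return a.priority > b.priority;
              });
    std::vector<std::string> consumers;
    for (const std::string& cmd : commands) {
        std::string consumer = "<unhandled>";
        for (const ActiveCpfControl& ctrl : controls) {
            if (std::find(ctrl.accepts.begin(), ctrl.accepts.end(), cmd)
                    != ctrl.accepts.end()) {
                consumer = ctrl.name;  // consumed; not offered further down
                break;
            }
        }
        consumers.push_back(consumer);
    }
    return consumers;
}
```

In this sketch a softkey control with higher priority would consume a command such as "Open" before a menu bar control ever sees it, matching the order protocol described above.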
Hence, with the present invention, application developers do not need to worry about the input methods available; for example, whether their application is running on a device with programmable function keys or a touch-screen. Furthermore, device manufacturers or network operators can control the look and feel of any applications that may be loaded on the device, provided that those applications use the CPF.
From the viewpoint of an application developer, the workings of the CPF are as shown in the attached figures.
In summary, the invention makes it possible to develop tailored applications without knowing the input characteristics of the device on which the application will run, by providing a method of controlling a computing device in such a way that a generic application, not specifically designed for that device, is nevertheless able to take advantage of those unique input methods that the particular device possesses. The preferred implementation of this invention is on devices such as mobile telephones, which have no fixed paradigm for providing input and whose keyboards (where they exist) have no fixed number of input buttons. In this invention, an intermediate software layer, which is preferably provided by the device manufacturer, processes a list of commands and actions provided by the designer of a generic application, assigns them to various input mechanisms, and constructs appropriate menus to display on the screen. Where the application supports multiple windows, views or panes, the intermediate layer is able to distinguish which part of the application has the focus and adjust the actions resulting from user inputs accordingly.
An example of code for carrying out the invention using the UIQ™ user interface and the Symbian OS™ operating system may be as follows.
Although the present invention has been described with reference to particular embodiments, it will be appreciated that modifications may be effected whilst remaining within the scope of the present invention as defined by the appended claims.
Number | Date | Country | Kind |
---|---|---|---|
0414842.5 | Jul 2004 | GB | national |
Filing Document | Filing Date | Country | Kind | 371c Date |
---|---|---|---|---|
PCT/GB2005/002605 | 7/1/2005 | WO | 00 | 12/19/2006 |