While the specification concludes with claims defining the features of embodiments of the invention that are regarded as novel, it is believed that the invention will be better understood from a consideration of the following description in conjunction with the figures, in which like reference numerals are carried forward.
Embodiments herein can be implemented in a wide variety of exemplary ways in various devices such as personal digital assistants, cellular phones, laptop computers, desktop computers, digital video recorders, set-top boxes, and the like. Generally speaking, pursuant to these various embodiments, a method or system herein can further extend the concept of user interfaces as skins by encapsulating a chosen primary skin/UI with a secondary (and/or tertiary) UI or branding, for example. Illustrative embodiments can include a ring or ring-tone of a manufacturer's choice within which the primary UI is rendered in a window of a service provider's choice. In another example, an alternative for a smaller-scale GUI (e.g., for a mobile device) can be embedded as a graphical icon representing an alternative UI for the smaller-scale GUI.
Each UI may have overlapping features (similar to existing skins that mimic behavior in different look and feel schemes) and non-overlapping features (e.g., phone settings can be limited to the device manufacturer's UI, or the service provider can have service-specific UI information). The user can swap between UIs by selecting the appropriate icon, menu, or haptic control. A user can have the ability to change UI representations as one would traditionally change a channel or display mode, in order to simplify the method of user personalization. In the past, when users have been expected to select their profile or “login”, they usually have not bothered; making the selection simple increases the likelihood that a user will select something other than a default UI. Since each skin or user interface may have different features, features unique to one UI relative to another may be highlighted when swapping or changing into a secondary UI. This can be done either automatically (on switching) or by selecting a ‘highlight differences’ button or menu item. A user can set defaults such that when certain UI screens transition to a particular feature, the system automatically switches to the preferred GUI for that feature. A user might prefer the service provider's VOD screens, for example, and the device manufacturer's playback screens.
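The ‘highlight differences’ behavior described above can be sketched as a simple set comparison. This is a minimal illustration, assuming each skin exposes a set of feature names; the feature names and function below are hypothetical, not part of any particular implementation.

```python
# Hypothetical sketch: computing which features to highlight when
# swapping from one skin/UI to another. Feature names are illustrative.

def highlight_differences(current_ui: set, new_ui: set) -> dict:
    """Return features to highlight when switching skins."""
    return {
        "gained": sorted(new_ui - current_ui),   # only in the new UI
        "lost": sorted(current_ui - new_ui),     # only in the old UI
        "shared": sorted(current_ui & new_ui),   # overlapping features
    }

manufacturer_ui = {"playback", "phone_settings", "favorites"}
provider_ui = {"playback", "vod", "favorites"}

diff = highlight_differences(manufacturer_ui, provider_ui)
print(diff["gained"])  # ['vod']
print(diff["lost"])    # ['phone_settings']
```

The "gained" set could be shown automatically on switching, or only when the user selects the ‘highlight differences’ control.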
Referring to
The presentation layer 11 illustrates a partition of the user interface functionality into several functional blocks. Each one of these blocks consists of one or more GUI screens. For example,
Referring to
New or additional user interface presentation components can be installed and selected by either the provider or the user. A default Favorites screen 40, for example, as illustrated in
Each screen represents a certain core piece of functionality, with various transitions between that screen and other screens. Each screen can be represented as a functional component that upon installation registers with the system one or more of the following: interaction functionality, the Look and Feel scheme, UI calls, and transitions with other functional components. As a set, each component of the Look and Feel scheme may create an entire or a partial GUI. Each function can be ‘overloaded’, such that a given function or screen can be represented by more than one look and feel. A given component may also represent additional functionality, providing additional transitions to new features.
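The registration scheme above can be sketched in a few lines. The class and field names here are assumptions for illustration; the point is that each screen component registers its function, look and feel, and transitions, and that a function can be ‘overloaded’ by more than one look and feel.

```python
# Illustrative sketch of UI screen components registering on
# installation. Registry structure and names are assumptions.

from dataclasses import dataclass, field

@dataclass
class UIComponent:
    function: str                  # core functionality, e.g. "favorites"
    look_and_feel: str             # scheme this component renders
    transitions: set = field(default_factory=set)  # reachable functions

class ComponentRegistry:
    def __init__(self):
        # each function may be 'overloaded' by several look-and-feels
        self._components = {}      # function -> list[UIComponent]

    def register(self, component: UIComponent):
        self._components.setdefault(component.function, []).append(component)

    def variants(self, function: str):
        return self._components.get(function, [])

registry = ComponentRegistry()
registry.register(UIComponent("favorites", "provider", {"vod", "playback"}))
registry.register(UIComponent("favorites", "manufacturer", {"playback"}))
print(len(registry.variants("favorites")))  # 2
```

A certification check or a required-transition check, as mentioned below, could be added as a guard inside `register`.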
To provide a framework for components, and to provide some consistency with the user, one particular embodiment can restrict downloadable components to match a required set of transitions and/or functionalities. In an alternative embodiment, only certified components may be installed in the system.
A registration manager can be made responsible for tracking installed UI components. In one embodiment, a single UI component for each functionality is set as active. When a transition occurs between one UI function and another, the registration manager can indicate which component is instantiated next. If the user selects a different component for that function, the new component will be registered as default. In another embodiment, a different set of components will be selected for each user of the system. In another embodiment, a different set of components can be selected based on the room or location (using GPS or IP addressing for example) or based on the device on which the UI is displayed.
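A hedged sketch of such a registration manager follows, covering the single-active-component case and the per-user case. The names and the fallback rule are illustrative assumptions; the per-location and per-device variants would key the lookup on room or device identity instead of user.

```python
# Sketch of a registration manager that tracks which installed
# component is active for each UI function, optionally per user.
# All names and the fallback policy are illustrative assumptions.

class RegistrationManager:
    def __init__(self):
        self._active = {}   # (user, function) -> look-and-feel name

    def set_active(self, function, look_and_feel, user="default"):
        self._active[(user, function)] = look_and_feel

    def next_component(self, function, user="default"):
        # on a transition, return the component to instantiate next,
        # falling back to the system-wide default for that function
        return self._active.get((user, function),
                                self._active.get(("default", function)))

mgr = RegistrationManager()
mgr.set_active("playback", "manufacturer")           # system default
mgr.set_active("vod", "provider", user="alice")      # per-user choice
print(mgr.next_component("playback", user="alice"))  # manufacturer
print(mgr.next_component("vod", user="alice"))       # provider
```

Calling `set_active` again for the same key models the user selecting a different component, which then becomes the registered default for subsequent transitions.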
A user can select to change to a completely new look and feel, or to a different look and feel for one or more components. As described above, this also allows a new look and feel to be downloaded by the service provider, while the old UI components still exist in the background.
Referring to
Referring to
Referring to
At decision step 85, if the component was downloaded by the user or an operator with the intention that the new component would be the default UI, this component is set or tagged as the default at step 86 and is displayed the next time a transition is made to that function. If the component was not downloaded to be a new default at decision step 85, then step 86 is skipped. In either case, in embodiments where optional components are displayed as icons or by some other means, the icon list is updated, and where appropriate a new icon is displayed representing the new component.
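The installation flow above can be sketched as follows: register the downloaded component, tag it as the default only if it was downloaded with that intention, and refresh the icon list in either case. The function and dictionary names are illustrative assumptions.

```python
# Minimal sketch of the component-download flow described above.
# Names are assumptions; the branch mirrors the decision step.

def install_component(installed, defaults, icons, name, function,
                      as_default=False):
    installed.setdefault(function, []).append(name)
    if as_default:                         # decision: downloaded as new default?
        defaults[function] = name          # tag as default for next transition
    icons[function] = installed[function]  # update icon list in either case

installed, defaults, icons = {}, {}, {}
install_component(installed, defaults, icons, "provider_vod", "vod",
                  as_default=True)
install_component(installed, defaults, icons, "mfr_vod", "vod")
print(defaults["vod"])   # provider_vod
print(icons["vod"])      # ['provider_vod', 'mfr_vod']
```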
Unlike existing skins and themes, where the complete user interface is replaced with a new user interface, embodiments herein allow the user to keep the user interface which the user prefers or is used to, while enabling the replacement of the user interface or user interface components that the user would like to remove. This provides greater flexibility and a better user experience. Note that each function can be ‘overloaded’, such that a given function or screen can be represented by more than one look and feel. A given component may also represent additional functionality, providing additional transitions to new features, such as certain interactive features.
Referring to
The application layer 91 provides a clean separation of the Behavior and the Presentation specifications. That means the application behavior can be changed separately from the presentation specifications and vice versa. This is a very important aspect of this framework for enabling the sharing of the user experience and the ability to change the user experience dynamically (based on environmentally driven policies).
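The behavior/presentation separation can be illustrated with two small, independently swappable objects. The classes and strings below are hypothetical; the point is only that either side can be replaced without touching the other.

```python
# Sketch of the behavior/presentation separation described above.
# All classes and strings are illustrative assumptions.

class Behavior:
    def on_select(self, item):
        return f"play:{item}"          # application logic only

class Presentation:
    def render(self, result):
        return f"[window] {result}"    # look and feel only

class Application:
    def __init__(self, behavior, presentation):
        self.behavior = behavior
        self.presentation = presentation  # each swappable independently

app = Application(Behavior(), Presentation())
print(app.presentation.render(app.behavior.on_select("movie")))
# [window] play:movie
```

Swapping in a different `Presentation` (e.g., a provider-branded one) leaves `Behavior` untouched, which is what permits the dynamic, policy-driven changes mentioned above.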
The Interaction management layer 92 is responsible for generating and updating the presentation by processing user inputs and possibly other external knowledge sources (for example a Learning engine or Context Manager) to determine the intent of the user.
The Modality Interface Layer 93 provides an interface between semantic representations of input/output (I/O) processed by the Interaction Management layer 92 and modality specifics of I/O content representations processed by the Engine Layer 94.
The Engine layer 94 performs output processing by converting the information from the styling component (in the Interaction Management Layer 92) into a format that is easily understood by the user. For example, a graphics engine displays a vector of points as a curved line, and a speech synthesis system converts text into synthesized voice. For input processing, the engine layer 94 captures natural input from the user and translates the input into a form useful for later processing. The engine layer 94 can include a rule based learning engine and context aware engine. The engine layer 94 can provide outputs to the hardware layer 95 and can receive inputs from the hardware layer 95.
The Device Functionality layer 96 interfaces with device-specific services such as a CDMA stack, a database, and the like. Such an architecture can have a clean separation of the device functionality from the application and can enable cleanly structured application data independent of device functionality.
The machine may comprise a server computer, a client user computer, a personal computer (PC), a tablet PC, a personal digital assistant, a cellular phone, a laptop computer, a desktop computer, a control system, a network router, switch or bridge, a mobile server, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. It will be understood that a device of the present disclosure broadly includes any electronic device that provides voice, video, or data communication. Further, while a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.
The computer system 600 can include a controller or processor 602 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), or both), a main memory 604 and a static memory 606, which communicate with each other via a bus 608. The computer system 600 may further include a presentation device such as a video display unit 610 (e.g., a liquid crystal display (LCD), a flat panel, a solid state display, or a cathode ray tube (CRT)). The computer system 600 may include an input device 612 (e.g., a keyboard), a cursor control device 614 (e.g., a mouse), a disk drive unit 616, a signal generation device 618 (e.g., a speaker or remote control that can also serve as a presentation device) and a network interface device 620. Of course, in the embodiments disclosed, many of these items are optional.
The disk drive unit 616 may include a machine-readable medium 622 on which is stored one or more sets of instructions (e.g., software 624) embodying any one or more of the methodologies or functions described herein, including those methods illustrated above. The instructions 624 may also reside, completely or at least partially, within the main memory 604, the static memory 606, and/or within the processor 602 during execution thereof by the computer system 600. The main memory 604 and the processor 602 also may constitute machine-readable media.
Dedicated hardware implementations including, but not limited to, application specific integrated circuits, programmable logic arrays and other hardware devices can likewise be constructed to implement the methods described herein. Applications that may include the apparatus and systems of various embodiments broadly include a variety of electronic and computer systems. Some embodiments implement functions in two or more specific interconnected hardware modules or devices with related control and data signals communicated between and through the modules, or as portions of an application-specific integrated circuit. Thus, the example system is applicable to software, firmware, and hardware implementations.
In accordance with various embodiments of the present invention, the methods described herein are intended for operation as software programs running on a computer processor. Furthermore, software implementations, including but not limited to distributed processing or component/object distributed processing, parallel processing, or virtual machine processing, can also be constructed to implement the methods described herein.
The present disclosure contemplates a machine readable medium containing instructions 624, or that which receives and executes instructions 624 from a propagated signal so that a device connected to a network environment 626 can send or receive voice, video or data, and to communicate over the network 626 using the instructions 624. The instructions 624 may further be transmitted or received over a network 626 via the network interface device 620.
While the machine-readable medium 622 is shown in an example embodiment to be a single medium, the term “machine-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term “machine-readable medium” shall also be taken to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by the machine and that cause the machine to perform any one or more of the methodologies of the present disclosure. The terms “program,” “software application,” and the like as used herein, are defined as a sequence of instructions designed for execution on a computer system. A program, computer program, or software application may include a subroutine, a function, a procedure, an object method, an object implementation, an executable application, an applet, a servlet, a source code, an object code, a shared library/dynamic load library and/or other sequence of instructions designed for execution on a computer system.
In light of the foregoing description, it should be recognized that embodiments in accordance with the present invention can be realized in hardware, software, or a combination of hardware and software. A network or system according to the present invention can be realized in a centralized fashion in one computer system or processor, or in a distributed fashion where different elements are spread across several interconnected computer systems or processors (such as a microprocessor and a DSP). Any kind of computer system, or other apparatus adapted for carrying out the functions described herein, is suited. A typical combination of hardware and software could be a general purpose computer system with a computer program that, when being loaded and executed, controls the computer system such that it carries out the functions described herein.
In light of the foregoing description, it should also be recognized that embodiments in accordance with the present invention can be realized in numerous configurations contemplated to be within the scope and spirit of the claims. Additionally, the description above is intended by way of example only and is not intended to limit the present invention in any way, except as set forth in the following claims.