Computers use graphics, animation, sounds, and the like to provide information to a user. Conventional utilities for presenting such information often require learning a complex computer language and hard-coding programs for a target system. Similar problems exist when a target system is programmed to receive various responses from the users. Synchronizing multiple resources, such as sound, graphics, and animation, is also difficult with the conventional utilities.
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended as an aid in determining the scope of the claimed subject matter.
The present disclosure is directed to a user interface for incorporating video contents such that the video contents can be manipulated to accommodate user-interface (UI) navigation. The video contents can be made to operate with a navigation structure so that UI customization can be enabled by using the same (or a similar) navigation structure with different video content. For example, video contents (which can comprise static or animated media) can be easily combined with functionality by a service provider to create a customized video menu. Providing customizable menus enables the service providers (who might otherwise not be programmers) to provide a compelling experience for the users of the customized menus.
Two major challenges in UI design are creating a “rich and beautiful” user experience and making the UI customizable by service providers (such as kiosk vendors) who target different users in various contexts. A video user interface is disclosed that incorporates video contents into a user interface, manipulates selected video contents to accommodate UI navigation, and makes the video contents interchangeable so as to enable UI customization.
These and other features and advantages will be apparent from a reading of the following detailed description and a review of the associated drawings. It is to be understood that both the foregoing general description and the following detailed description are explanatory only and are not restrictive. Among other things, the various embodiments described herein may be embodied as methods, devices, or a combination thereof. Likewise, the various embodiments may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. The disclosure herein is, therefore, not to be taken in a limiting sense.
As briefly described above, embodiments are directed to a video user interface. With reference to
Computing device 100 may have additional features or functionality. For example, computing device 100 may also include additional data storage devices (removable and/or non-removable) such as, for example, magnetic disks, optical disks, or tape. Such additional storage is illustrated in
Computing device 100 also contains communication connections 116 that allow the device to communicate with other computing devices 118, such as over a network. Networks include local area networks and wide area networks, as well as other large scale networks including, but not limited to, intranets and extranets. Communication connection 116 is one example of communication media. Communication media may typically be embodied by computer readable instructions, data structures, program modules, or other data in a modulated data signal, such as a carrier wave or other transport mechanism, and includes any information delivery media. The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. The term computer readable media as used herein includes both storage media and communication media.
In accordance with the discussion above, computing device 100, system memory 104, processor 102, and related peripherals can be used with video user interface 120. Video user interface 120 in an embodiment can be used to allow service providers to create customized video user interfaces (described below with reference to
A keyframe can be used to establish a link between time-based media and an abstract collection of items used to provide functionality to the menu. Keyframes can be distributed at predetermined locations (“predefined structure”) and/or by marking selected frames (“tagging”). Keyframes may or may not be uniformly distributed across the video. Thus, in accordance with a predefined frame structure, keyframes 210, 220, 230, and 240 can be predefined to be at frames #5, #21, #31, and #41, respectively, and/or individual frames can be marked with tags according to the video contents. The video user interface can apply to any standard or special video codec (for JPG sequences, MPG sequences, WMV sequences, WAV files, MIDI files, and the like) with or without metadata embedded.
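As a rough, non-limiting sketch in Python, the keyframe-to-item association described above might be modeled as a simple mapping; the names KeyframeMenu, MenuItem, and tag, along with the frame numbers and labels, are hypothetical and are not part of this disclosure.

```python
# Minimal sketch (assumed names): a keyframe map linking frame indices in a
# video to abstract menu items that carry the menu's functionality.
from dataclasses import dataclass, field

@dataclass
class MenuItem:
    label: str    # text shown for the item
    action: str   # e.g. "purchase", "download", "navigate"

@dataclass
class KeyframeMenu:
    # frame index -> menu item; keyframes need not be uniformly spaced
    keyframes: dict[int, MenuItem] = field(default_factory=dict)

    def tag(self, frame: int, item: MenuItem) -> None:
        """Mark ("tag") a selected frame as a keyframe for the given item."""
        self.keyframes[frame] = item

# Predefined structure: keyframes at frames 5, 21, 31, and 41.
menu = KeyframeMenu()
for frame, label in [(5, "Movies"), (21, "Music"), (31, "Games"), (41, "Help")]:
    menu.tag(frame, MenuItem(label=label, action="navigate"))
```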
In operation, a service provider uses the video user interface to select media resources and provide functionality to be selected by a user. A user who is offered the menu (via a kiosk, for example) can select menu items to cause actions to be performed, such as purchases, downloading, and navigation through the menu structure.
When the user scrolls through selectable items in a video user interface menu, the video content is played backward or forward to the targeted keyframe, which corresponds to the most recently selected item. As a result, smooth animated transitions (from the previously selected keyframe to the most recently selected keyframe) occur when the selection changes. Using a control (such as by selecting a menu item), the video can be played forward or backward, at a variable speed (or speeds), to a targeted menu-item frame, for example.
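For illustration only, a minimal sketch of this scroll-to-keyframe behavior might look like the following; the function name play_to_keyframe and its parameters are assumptions rather than an implementation prescribed by this disclosure.

```python
# Minimal sketch (assumed API): scrolling plays the video toward the keyframe
# of the newly selected item, producing an animated transition.
def play_to_keyframe(current_frame: int, target_frame: int, speed: int = 1):
    """Yield frame indices to render while the video plays toward the target
    keyframe. speed must be a positive integer."""
    step = speed if target_frame >= current_frame else -speed
    frame = current_frame
    while frame != target_frame:
        frame += step
        # Clamp so a speed greater than 1 cannot overshoot the target keyframe.
        frame = min(frame, target_frame) if step > 0 else max(frame, target_frame)
        yield frame

# Example: the user scrolls from the item at frame 21 back to the item at frame 5.
frames_to_render = list(play_to_keyframe(current_frame=21, target_frame=5))
```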
Frame collection 400 comprises frames demarcated at keyframe boundaries (“video segments”). For example, segment 410 can be used to provide a “splash screen” introduction to the menu when the menu is first activated. Segments 420 can be used when navigating “up” in a tree of menu selection items. Segments 430 can be used as loop segments (which allow the menu display to increase user interest through animated effects, for example). Segments 440 can be used when navigating “down” in a tree of menu selection items. As discussed above, audio can be sequenced in conjunction with the user's navigation of the menu.
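A minimal sketch of such a frame collection follows; the names Segment, role, and segments_for, as well as the frame ranges, are hypothetical and serve only to illustrate segments demarcated at keyframe boundaries.

```python
# Minimal sketch (assumed names): video segments demarcated at keyframe
# boundaries, each assigned a role in menu navigation.
from dataclasses import dataclass

@dataclass(frozen=True)
class Segment:
    start: int   # first frame of the segment (a keyframe boundary)
    end: int     # last frame of the segment
    role: str    # "splash", "up", "loop", or "down"

frame_collection = [
    Segment(0, 4, "splash"),   # intro played when the menu is first activated
    Segment(5, 20, "up"),      # played when navigating up the menu tree
    Segment(21, 30, "loop"),   # looped for animated idle effects
    Segment(31, 41, "down"),   # played when navigating down the menu tree
]

def segments_for(role: str) -> list[Segment]:
    """Return the segments that serve a given navigation role."""
    return [s for s in frame_collection if s.role == role]
```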
Because of the flexibility offered by video, video user interface menu layouts are not constrained to conventional vertical-list formats. Instead, service providers can use the video user interface menu layouts to provide a broad range of creative treatments, such as three-dimensional layouts and/or special effects like water ripples, fog, and smoke. Additionally, a render engine for the user interface can be used to provide special dynamic capabilities for text, shapes, and static user interface elements, manipulating and synchronizing them to the underlying video user interface. The dynamic capabilities comprise functions such as scale, move, rotate, fade, color, and the like.
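As one possible illustration of how a render engine might keep overlay properties in step with the underlying video, the following sketch interpolates properties such as scale and opacity between two keyframes; the names overlay_state and lerp are assumptions, not an API defined by this disclosure.

```python
# Minimal sketch (assumed structure): keyframed transforms applied by a render
# engine to text/shape overlays, synchronized to the underlying video frame.
def lerp(a: float, b: float, t: float) -> float:
    """Linear interpolation between a and b for t in [0, 1]."""
    return a + (b - a) * t

def overlay_state(frame: int, start: int, end: int,
                  start_props: dict, end_props: dict) -> dict:
    """Interpolate overlay properties (e.g. scale, rotation, opacity) between
    two video keyframes so the overlay stays in sync with the video."""
    t = 0.0 if end == start else max(0.0, min(1.0, (frame - start) / (end - start)))
    return {key: lerp(start_props[key], end_props[key], t) for key in start_props}

# Example: fade in and enlarge a text label between keyframes 5 and 21.
state = overlay_state(frame=13, start=5, end=21,
                      start_props={"scale": 0.5, "opacity": 0.0},
                      end_props={"scale": 1.0, "opacity": 1.0})
```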
For example, text layers 610, 620, and 630 can be identical, or slightly modified to be substantially similar, from the programmer's point of view. Text layer 610 can be associated with video 640, which is different from video 650. Text layer 620 can be associated with video 650, which is different from video 660. Text layer 630 can be associated with video 660. Thus the effort used to make menu structures (such as text layers) can be reused efficiently to make a variety of menus that appear to be different yet retain a user interface that is learned and becomes familiar to groups of targeted users.
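The reuse described above might be sketched as follows; the names TextLayer and VideoMenu and the video file names are hypothetical and stand in for the text layers 610-630 and videos 640-660.

```python
# Minimal sketch (assumed names): the same text layer (navigation structure)
# reused with different video files to produce distinct-looking menus.
from dataclasses import dataclass

@dataclass
class TextLayer:
    items: list[str]   # labels shared across customized menus

@dataclass
class VideoMenu:
    text_layer: TextLayer
    video_file: str

shared_layer = TextLayer(items=["Movies", "Music", "Games", "Help"])

menus = [
    VideoMenu(shared_layer, "ocean_theme.wmv"),   # hypothetical video files
    VideoMenu(shared_layer, "city_theme.wmv"),
    VideoMenu(shared_layer, "forest_theme.wmv"),
]
```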
Linking a common (or similar) underlying structure to different videos facilitates the process of making customized video user interfaces. One example is personalizing menu content, such as making video user interfaces for specific people. Another example is generating dynamic content in response to various contexts and locations, such as loading a new UI through a wireless network. Additionally, promotional and advertisement-based user interfaces can be quickly ported to time-sensitive products such as a new movie or music video. Further, menu items can be linked to a time clock to provide, for example, morning, noon, afternoon, and evening product offerings.
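A minimal sketch of linking offerings to a time clock could be the following; the specific offerings and the period boundaries are hypothetical examples only.

```python
# Minimal sketch (assumed data): selecting product offerings by time of day,
# as in the morning/noon/afternoon/evening example above.
from datetime import datetime
from typing import Optional

OFFERINGS = {
    "morning":   ["coffee", "breakfast sandwich"],
    "noon":      ["lunch combo"],
    "afternoon": ["snack", "iced tea"],
    "evening":   ["dinner special", "movie rental"],
}

def current_offerings(now: Optional[datetime] = None) -> list[str]:
    """Return the offerings for the current (or supplied) time of day."""
    hour = (now or datetime.now()).hour
    if hour < 11:
        period = "morning"
    elif hour < 14:
        period = "noon"
    elif hour < 18:
        period = "afternoon"
    else:
        period = "evening"
    return OFFERINGS[period]
```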
Various applications for the video user interface may include a platform for advertisement-based content (such as movies, downloadable audio content, soft-drink products, and/or other time-sensitive product sales). The video user interface also enables selling and sharing video user interface clips through linking websites and/or wireless services. Custom branded video user interfaces can be easily created and updated, for example, by custom branding menus on corporate mobile phones to provide corporate branding and standardized functionality to employees. The video user interface also provides another medium through which artists and designers can create artistic and expressive interfaces and personalized narratives of arbitrary media content.
Tools can be provided to facilitate service providers in creative (as well as functional) processes of making video user interfaces. The reuse of components in a tool context can allow relative novices to create professional quality presentations.
The tools for making the video user interfaces can be organized as stand-alone tools, plug-ins, or incorporated into hardware products. For example, stand-alone tools can be used to make user-navigable video and audio clips. The commands of the stand-alone tools can be configured specifically for making video user interfaces, which allows the user interface to be constrained, and thus easier for users to learn and use.
Plug-ins can be provided for standard video/audio editing tools to incorporate video user interface creation and playback functionality in the existing tools. The users, who are familiar with the transport and editing controls of the existing tools, can readily assimilate the controls of the plug-in (which can be constrained to video user interface functionality).
The video user interface functionality can be incorporated into hardware products. For example, a video camera can be equipped with special editing software on the device that can be used to create customized navigable videos. Additionally, an electronic kiosk can include the video user interface software, such that a service provider (who presumably knows the needs of the consumer) can generate a customized video user interface at the point of sale.
The above specification, examples and data provide a complete description of the manufacture and use of embodiments of the invention. Since many embodiments of the invention can be made without departing from the spirit and scope of the invention, the invention resides in the claims hereinafter appended.