Rendering of an Interactive Lean-Backward User Interface on a Television

Abstract
Embodiments of the invention relate to user interfaces and systems and methods for generating a real-time “lean-back” user interface for use with a television or other display device and for reuse of encoded elements for forming a video frame of the user interface. An interactive session is established between a client device associated with a user's television and the platform for creating the user interface over a communication network, such as a cable television network. The user interface is automatically generated by the platform and is animated even without interactions by the user with an input device. The user interface includes a plurality of interactive animated assets. The animated assets are capable of changing over time (e.g. different images, full-motion video) and are also capable of being animated so as to change screen position, rotate, move etc. over time. A hash is maintained of cached encoded assets and cached elements that may be reused within a user session and between user sessions.
Description
TECHNICAL FIELD

The present invention relates to user interfaces and systems for creating user interfaces, and more particularly to a user interface, and a system for creating it in real-time, with personalized interactive content.


BACKGROUND ART

When a user interacts with content in a computer environment, the user does so through a “lean forward” user interface. Such interfaces require the user to actively participate through keystrokes and mouse clicks. The user interface of interactive television services often consists of ‘menu items’ from which a user actively selects one (by navigating using up/down/left/right buttons, and then pressing ‘ok’ or a similar button to confirm the choice). The menu items are typically listed in a static 2D layout. The menu items typically lead, after a number of menus and selections, to trailers, synopsis data, video assets (Video on Demand) or a linear broadcast. This can be characterized as a ‘lean forward’ user experience, because the user is requested to actively make a choice before anything else happens. Often multiple user interactions are required before a preview of a content item is shown. If the user does nothing, the screen remains the same, except perhaps for some animated user interface elements.


When content is displayed in a television environment, the user generally wishes to have a “lean-back” experience with little interaction with a controller, but still desires to be entertained. This is even true of interactive content that is distributed through communication networks, such as cable television systems. The “lean forward” web browser, word processor, and other applications do not translate well to a television viewing experience. Thus, there is a need for a “lean-back” user interface that can be rendered in real-time and that provides for individualized user content.


It is known in the prior art, in computer-based systems, to employ tiles that, when moved over with a cursor, provide a presentation view of the content. Currently, the presentation view is for static content (e.g. word processing documents, spreadsheets). The current version of Microsoft's Windows product includes such functionality.


Additionally, a company called Animoto provides an animation-rendering web service. Animoto accepts a set of movies and still pictures as assets, and renders an animation to present these assets. However, the end user cannot interact with the renderings.


SUMMARY OF THE EMBODIMENTS

Embodiments of the invention relate to user interfaces and systems and methods for generating a “lean-back” user interface for use with a television or other display device.


First, an interactive session is established between a client device associated with a user's television and the platform for creating the user interface over a communication network, such as a cable television network. The user interface is automatically generated by the platform and is animated even without interactions by the user with an input device. The user interface includes a plurality of interactive animated assets. The animated assets are capable of changing over time (e.g. different images, full-motion video) and are also capable of being animated so as to change screen position, rotate, move etc. over time. The interactive animated assets have an associated state. For example, an asset may be active or inactive.


After an interactive session has been established, the platform identifies a TV application to execute. The TV application determines a plurality of interactive animated assets to present to the user based upon a user profile. The user profile may indicate preferences of the user or may have historical information about the user's past actions (e.g. movie selections, television shows viewed etc.).


The TV application determines the state of each of the assets and determines the tiles that need to be generated. One tile can contain zero or more assets. To generate each tile, the application generates a tile creation request. The tile creation request includes animation scripting information or a reference thereto. The one or more tile creation requests are executed by the platform, and each results in one MPEG fragment.


The platform stitches each video frame based on the TV application output with the plurality of interactive animated assets to form a sequence of encoded video frames. In certain embodiments of the invention, a hash value is created for each tile creation request and is added to a database of hash values that are stored in cache memory. Thus, prior to the execution of a tile creation request, the tile creation request is hashed and a comparison with the database of hash values is performed. If there is a match of the hash value, the cached tile asset in cache memory is retrieved and passed along so that the MPEG fragment can be stitched into a video frame. If the hash value does not match, the asset/tile is retrieved and processed prior to the MPEG fragment being output for stitching with other MPEG fragments to form a video frame.
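The hash-then-reuse flow above can be pictured with a minimal sketch. The SHA-256 hash, the dictionary cache, and the render_and_encode stub below are assumptions chosen for illustration only; the platform's actual hashing and rendering components are described later in this application.

    import hashlib
    import json

    def render_and_encode(tile_request):
        # Stand-in for the platform's render/encode step (hypothetical).
        raise NotImplementedError("rendering/encoding is platform-specific")

    def get_mpeg_fragment(tile_request, cache):
        # Hash the serialized request so identical requests map to the same key.
        key = hashlib.sha256(
            json.dumps(tile_request, sort_keys=True).encode("utf-8")
        ).hexdigest()
        if key in cache:                    # hit: reuse the cached MPEG fragment
            return cache[key]
        fragment = render_and_encode(tile_request)  # miss: render and encode
        cache[key] = fragment               # keep for reuse within/between sessions
        return fragment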


The platform transmits the sequence of encoded video frames to the client device associated with the user.


The interactive animated asset may include a graphical component which is a tile that is smaller than an entire video frame. In an active state, the graphical component is represented as a full motion video (e.g. a program preview) and in an inactive state the graphical component may be a still image.


The scripting information associated with the tile creation request may cause the encoding or transcoding of an interactive animated asset. For example, the graphical component may be stored as a full-screen preview, and the script will indicate the size of the tile for the graphical component of the user interface, causing the graphical component to be resized to fit within the tile. In order to facilitate real-time creation and updating of the user interface, the assets may be pre-encoded. The assets may be formed in an encoded stitchable format so that the assets can be combined in the encoded domain. As a result, all of the assets need not be decoded, rendered and re-encoded in order to produce a sequence of streaming video frames. In an embodiment of the invention, the assets are pre-encoded as macroblock encoded elements for stitching in the encoded domain.


The TV application causes an asset to be represented as an active asset by some indicia (e.g. border, different color, brighter color etc.). The TV application will then periodically switch between assets, making an inactive asset “active” and the active asset “inactive”. A user viewing the user interface may interact with the user interface to control the script. For example, the user may slow down or speed up how frequently an asset becomes active. Additionally, a user may use an input device to interrupt the script and cause an asset to become “active”. Once an asset is active, the user may interact with the asset. For example, the user may select the asset and cause the asset to be displayed as a full-motion full-screen preview as opposed to a full-motion tiled preview. The user may also cause the interactive asset to be stopped, slowed down, or sped up. This interactivity is limited to active assets. Thus, a user cannot interact with an inactive tile to change the speed of or stop the tile asset.


The platform may include a stitching module. The stitching module will stitch together the selected assets based upon a recommendation engine along with a selected graphical portion of a TV application output to form a video frame. The video frames will be updated and output on a regular basis to form an encoded video stream that is directed to the client device of the user for decoding and display. The stitched content may represent an encoded video frame in an encoded video frame sequence, such as an MPEG transport stream.





BRIEF DESCRIPTION OF THE DRAWINGS

The foregoing features of embodiments will be more readily understood by reference to the following detailed description, taken with reference to the accompanying drawings, in which:



FIG. 1 is a schematic block diagram of a system for generating a real-time user interface that may be personalized for transmission of a compressed video stream through a communication network to a display device associated with a user;



FIG. 1A is a schematic representation of system components for generating a real-time user interface including a list of stored hashes for MPEG fragment reuse;



FIG. 2 is a flow chart of one embodiment of a method to create a real-time user interface;



FIG. 3 is a first representative lean-back user interface; and



FIG. 4 is a second representative lean-back user interface.





DETAILED DESCRIPTION OF SPECIFIC EMBODIMENTS

Definitions. As used in this description and the accompanying claims, the following terms shall have the meanings indicated, unless the context otherwise requires:


The term “animation” shall imply movement of an object such as an “asset” on a display device. The movement may be either within an asset (e.g. a full-motion video preview of source content) or the movement may be animation of the asset as a whole (e.g. a movie poster that rotates, inverts, bounces, or is cut into smaller pieces that move apart and then re-join, changing at least in part the addressable location on a display device).


The term “TV application” shall refer to a series of computer instructions and stored states that prescribe how the interactive TV application responds to stimuli such as keystrokes.


The term “asset” shall refer to a preview of a movie, TV program, photograph etc.; it can be a movie still or a full motion video or a thumbnail. Assets are associated with the underlying source content (complete movie, complete TV program, full-size photograph etc.).


The term “tile creation request” shall refer to a request issued by a TV application to a Tile Factory. The request includes references to source assets and/or previews thereof. The request may also include animation scripting information or a reference thereto.


The term “tile” shall refer to one or more assets that are animated as specified in the tile creation request's animation script. The size of a tile is typically determined by the screen area required to render the animated asset(s). The animation can, for example, let the asset start as a small dot and progressively zoom in until it fills the area allocated for the tile. Each tile has associated functionality as defined by the TV application. For example, if a user selects a tile using an “input device” and the tile is in an “active” state, the user interface will change in response. The user interface may change to show the asset as a sequence of full video frames (e.g. a movie preview that fills the entire video display) or the user interface may include additional queries for the user, such as “do you wish to order this program?” Thus, tiles have associated “states”. A tile may be an “active” tile, wherein the tile can be accessed by the user using an input device. The tile may be an “inactive” tile displaying a still image, a rotating set of images, or an animation, wherein the user cannot immediately interact with the tile. Once a tile is made “active”, a user can then cause changes to the user interface and control the asset. For example, an active tile may be presenting a full-motion preview of the asset within the tile. The user can then cause the preview to be displayed on the full screen. In addition, the user may be given control over the asset. The user may cause the asset to fast-forward, rewind, play or stop, for example.


The TV application is a computer program that automatically causes a first tile to be designated as the “active” tile. Upon activation as the active tile, the tile may change its graphical representation from a still image to a full-motion video (e.g. a preview). The full motion video will generally be a reduced/scaled version of a preview. The TV application may be written so as to automatically switch between tiles upon some triggering event or at a fixed or variable time. It can do so at any time, regardless of whether an animation has finished running or not. For example, tiles may be made active at 10 second intervals or upon completion of the preview (full motion video). When the TV application changes the “active” tile, the audio that is played may be changed correspondingly, where the audio of the now no longer active tile is muted. A tile may have video, (audible or muted) audio, textual descriptions, or a combination thereof. The TV application may specify to change the layout of the user interface over time; for example, it may include an animation which removes one tile and re-arranges the remaining tiles. The tile that is removed may, for example, shrink until it disappears. The TV application will set up a tile creation request referring to the asset and an animation that reduces the size of the asset in time.
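The automatic switching between tiles can be sketched briefly. The Tile class and its activate/deactivate methods below are assumptions introduced for illustration; an actual TV application would drive the platform's tile states rather than local objects.

    import itertools
    import time

    class Tile:
        # Minimal stand-in for a tile with an active/inactive state (hypothetical).
        def __init__(self, name):
            self.name = name
            self.active = False

        def activate(self):      # e.g. full-motion preview with audible audio
            self.active = True

        def deactivate(self):    # e.g. still image with muted audio
            self.active = False

    def rotate_active_tile(tiles, interval_s=10.0, rounds=1):
        # Make each tile "active" in turn at a fixed interval, muting the others.
        for tile in itertools.islice(itertools.cycle(tiles), rounds * len(tiles)):
            for other in tiles:
                other.deactivate()
            tile.activate()
            time.sleep(interval_s)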


The “Tile Factory” will then render and compress the animation. The Tile Factory is a component of the platform for creating the user interface, or parts thereof, in real-time, and providing the user interface as a compressed stream to a stitcher component, or directly to a decoder (the “client device”). The client device outputs the user interface to a display device. The Tile Factory component and the platform are explained in further detail below. In the present application, the term “real-time” refers to less than two seconds between a user's key press and a change on the display device of the user as a result of the key press.


The TV application may also have interactive functionality. For example, a user may control the rate of change between assets by pressing a ‘fast forward’ keystroke. As a result, the TV application reduces the playing time of each tile, and increases the pace of presenting tiles. The TV application may also reference user recommended assets and the asset's underlying source content. A tile will have associated graphical content (assets), a state, and functionality associated with the tile. Conceptually these are all part of the TV application. An asset may be associated with a movie or television program, or may represent a list of programs (music, drama, mystery etc.). The tile will include a graphical image, series of graphical images or animation associated with an inactive state, and a full-motion video sequence associated with an active state. The video sequence, which is full-motion, may be a preview of a movie, video, game or television show. The underlying source content, or source content, shall refer to a full-screen version of a complete movie, complete television show or other entertainment form.


An “animation script” prescribes how certain input graphical or audio objects are modified over time. It also includes information on the background and other context of the graphical objects, for example to indicate that the background is filled with a colour or a texture.


The term “user” may refer to an individual (“John Doe”) or may refer to a household (the “Smith” family) or other group that has access to a display device and accesses the lean-back user interface.


The term “encoded” shall refer to the process of compressing digital data and formatting the resultant data into a protocol form. For example, encoded data may be encoded using an MPEG protocol such as MPEG-1, MPEG-2, or MPEG-4 for example. Each of the MPEG standards specifies a compression mechanism and encapsulation in a format for transmission and decoding by a compatible decoder.


The term “object” shall refer to an encoded/unencoded asset or an encoded/unencoded TV application screen element, or a combination thereof.



FIGS. 1 and 1A show a platform for generation of a user interface. The user interface is transmitted across a communication network (e.g. Internet, cable television system etc.) to a display device of a user. The user interface is transmitted from the platform as a compressed video stream. In a preferred embodiment, the user interface is transmitted as a compressed stream wherein the user interface is personalized based upon information known about and collected from the user. All of the processing in the creation of the user interface is performed remote from the user's location by one or more remote processors. For example, the processing may be performed at a cable television head-end or at a location removed from a cable-television head-end, i.e. a location in communication with the cable television head-end via a secondary network, such as the Internet. The user interacts with the remote processor using the display device itself (e.g. tablet PC) or a controller, such as an infrared remote control, that is used to control the display device. The display device relays the controller instructions to the remote processor.


In the contemplated environment, the transmitted compressed video stream is received by a client device. The client device may be a set-top box, a television, tablet PC, or other device that is capable of decompressing the compressed unitary stream. For example, if the compressed unitary stream is encoded as an MPEG transport stream, the client device would include an MPEG decoder. In such a configuration, the client device needs only a small piece of client software to connect to the remote processor, transmit keys, and receive the user interface as MPEG audio/video stream (‘streaming user interface’). It should be understood by one of ordinary skill in the art that the present invention is not limited to MPEG encoding and that other encoding schemes may also be employed without deviating from the intent of the invention.


In the present lean back user interface, ‘menu items’ are presented in the form of short trailers, animations, or stills (‘previews’) that are played out as tiles. Multiple tiles are presented consecutively and may be visible simultaneously (See for example FIGS. 3 and 4). The active asset (one of the tiles) launches without active user participation for a relatively short time—typically a few seconds or so, sufficient for the user to decide if he or she wants to explore the item some more (e.g. view a trailer or synopsis data). The tiles may be presented in a 3D representation and move around on the screen. Thus, each asset may be repositioned to a different addressable location on the screen. The asset may be represented in a three dimensional space with coordinates that are translated to the two-dimensional display space. The assets may be repositioned by a script (animation script) that becomes active when the user interface is presented on the display device of the user. Even if the user does not use an input device to select or interact with an asset, there are always visually attractive elements to draw the attention of the user that are continuously being refreshed. If the TV application decides that a tile should change from ‘active’ state to ‘inactive’ state, for example based on user input, it will stop the playout of the tile and/or replace it with the inactive representation of the tile. Thus, the TV application and the animation script associated with the tile creation request causes continuous and automatic updating of the assets displayed on the display device of the user.


The system as shown in FIG. 1 allows for real-time processing across a non-quality of service communication network, such as the Internet. Similarly, the system can be used with cable-television networks, satellite networks, and other networks in combination with the Internet in which there are data delays due to network bandwidth congestion. Real-time processing is achieved by caching reusable assets that may be applied across many different user interfaces or repeatedly used within a single user's interactive session. Additionally, all or portions of the objects that the TV application refers to may be cached. The reusable objects may be stored in a pre-compressed format, so that the encoded objects (assets and TV application screen elements) can be stitched together in the encoded domain rather than requiring that the objects are first spatially rendered, composited, and then compressed. In one embodiment, the objects may be stitchable MPEG macroblocks that have already been encoded using a frequency based and/or temporal transformation. An example of such an architecture for using stitchable content can be found in U.S. patent application Ser. No. 12/008,697 entitled “Interactive Encoded Content System including Object Models for Viewing on a Remote Device”, which is incorporated herein by reference in its entirety. Because movement of the assets (direction and speed) is determined by an animation script associated with the TV application and the full-motion content of the video asset is known, the remote processor can determine which screen areas remain unmodified, which areas are unchanged except for motion (and thus have associated motion vectors), and which screen elements need to be completely rendered and compressed in real-time. The platform components of FIG. 1 may use stitchable MPEG elements (transform encoded elements) or may use other elements that can be stitched together to form a compressed video stream.


As shown in FIG. 1 an Application Engine module 103 may operate to centrally control the creation of a lean-back user interface. It should be understood by one of ordinary skill in the art that each of the modules shown in FIG. 1 may be implemented as separate processors or integrated circuits, such as an ASIC, and may also include corresponding application code or the modules may be combined together and operate on a single processor. For example, the tile factory module and the streaming engine module may be combined together and the computer code for both the streaming engine module and the tile factory module may be implemented on a single processor. Similarly, the modules may be understood to represent functional computer code blocks. These computer code blocks can be implemented on a tangible computer readable medium and used in conjunction with one or more processors for creation of the lean-back user interface.


After an interactive session is established (i.e. through exchange of identification and addressing data) between a user using a controller 100 through a client device 101 coupled to a display device (e.g. a television) 102, the application engine 103 becomes active and the TV application is launched. Based upon the identity of the user, the recommendation engine 104 accesses a user profile 105. The user profile 105 maintains information about the user. For example, the user profile may include user preferences (favorite genres, actors, etc.) as well as historical information about the user. The historical information may include account information, including previously selected interactive content, and accessible content (e.g. does the user subscribe to a particular service?). Based upon a predetermined algorithm using the user's profile, the recommendation engine 104 accesses a database of available assets and provides the recommendations 110 to the application engine 103. One example of a recommendation engine is a product entitled Aprico. The product includes a set of algorithms that use different sources, including user “clicks” on an asset, time spent reviewing an asset, and actual consumption of the source content, in order to determine a recommendation (content to be displayed). The Aprico program maintains a database compiled of such personalized data for a user profile. Metadata associated with an asset can be used by the recommendation engine to determine if a user has a preference for an actor, director, genre or other property.


In addition to the recommendations provided by the recommendation engine, the application engine also receives metadata 106 from a metadata provider 107 along with metadata enrichment information 108 from a metadata enrichment module 109 and associated database. The metadata provider 107 may obtain and process information regarding a cable television's broadcast schedule, on demand offerings, and interactive features (such as games) and store the data in an associated database. For example, the metadata may include the title of the program/show/movie, the length of time of the source content, available viewing times for the source content if the source content is broadcast, principal actors, and a synopsis of the source content. The metadata provider 107 will then provide the metadata information 106 to the application engine 103 when requested.


The metadata enrichment module 109 adds metadata that is not traditionally present in current metadata databases but which can further personalize the ActiveTile asset (i.e. the tile and the associated asset). An example is selection of a movie that contains scenes where a favorite actor is appearing. The metadata enrichment module 109 may receive the metadata from the metadata provider 107 and associate enrichment data with the metadata. For example, the metadata enrichment module 109 may contain scene information for assets along with performers in the source content. Thus, a personalized experience can be created by the application engine 103. An ActiveTile asset can be created within the application engine for a preferred performer. In this way, all of the available programs/shows/movies (source content) that include the preferred performer may be accessible through this personalized ActiveTile asset that can be added to the lean-back user interface created within the application engine 103. The application engine 103 can use the additional metadata 111 to select a further personalized video trailer for the asset, for example by having the asset show a preview of the movie where the preferred performer is appearing when the asset is active.


The application engine 103, while executing the TV application, prepares one or more tile creation requests for the tile factory 120, specifying source assets and animation information. The assets are stored in a data store that may be part of or external to the platform, but which is in communication with the tile factory. The asset may include reference to trailers, posters, photographs, and synopsis information obtained from the metadata. The tile factory may access external data stores for accessing the graphical content associated with an asset. As shown, the tile factory would access full motion video content from a trailer repository 125 and access still image information from a poster repository 130 in response to a received command to create a tile that includes an asset. The asset may include tile content from third party tile content providers 140 such as YouTube or Flickr. The application engine, while executing the TV application, selects the assets to include in the command based on recommendations from the Recommendation Engine. In one embodiment of the invention, the command sent to the tile factory is formatted as an HTTP POST request to a URL, where the body of the request carries the command encoded in JSON.
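As a sketch of that transport, the command might be posted as follows. The endpoint URL and the shape of the command dictionary are assumptions; only the use of an HTTP POST carrying a JSON body is taken from the description above.

    import json
    import urllib.request

    def send_tile_creation_request(url, command):
        # POST the tile creation command, JSON-encoded, to the Tile Factory URL.
        req = urllib.request.Request(
            url,
            data=json.dumps(command).encode("utf-8"),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            return resp.read()   # e.g. a reference to the rendered MPEG fragment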


The application engine passes the command URL to the Streaming Engine. The Streaming Engine issues the command to the tile factory, which then accesses the animation information and the requested assets. The tile factory may perform a number of graphical processing functions on the graphical elements of the asset such as transcoding and resizing of the asset to fit within the TV application's screen layout at an addressable location. It may perform more advanced processing functions such as shading, slanting, or positioning the asset in a 3D coordinate system and then computing a 2D projection, or moving the asset in time along a 2D or 3D trajectory that is specified in the animation script. In certain embodiments, the tile factory may include elements for providing indicia of the asset being active. For example, the indicia may be a border that surrounds the addressable location of the asset when displayed. In certain embodiments, elements of an asset may be pre-encoded as MPEG fragments (macroblocks that have been transform encoded). After having rendered and compressed the animated asset, the tile factory passes the encoded representation of the asset to the streaming engine. The streaming engine stitches together, preferably in the encoded domain, each of the graphical elements of the assets and the graphical elements of the TV application screen. The Streaming Engine generates a real-time compressed audio/video stream that is transmitted to the client device (over any suitable network infrastructure), decoded (decompressed) by the client device, and displayed on the user's display device. In a preferred embodiment, the tile factory uses information from an animation script associated with the tile creation request to efficiently encode MPEG. The tile factory may use information on speed and direction of the movements of animated assets as input for the motion prediction in an MPEG encoder, so as to create stitchable elements for the streaming engine.


The tile factory executes a tile creation request that references assets and an animation script, and renders it into MPEG. When the user interacts with the lean-back UI Application, the generation of the UI stream is affected. For example, the user can press a ‘fast forward’ button, which increases the speed at which tiles are being presented to the user (while the playout of the tile content (e.g. movie trailers) stays at normal speed). The user can also navigate through the tiles if multiple tiles are shown on the display device. The selected tile is highlighted and put into an active state, and may exhibit different audio/visual behavior from the other ActiveTiles to attract attention. For example, the audio of the active asset is selected (others are muted), or the active asset is highlighted, surrounded with a border, made more colourful, or in general given any other visual styling that attracts more attention than that of the inactive tiles.


An asset can include audio plus video, or only video, or only audio. An asset that has only audio can be understood as a ‘voice over’ that gives a vocal announcement of the tiles that are available for the user. The audio can be a pre-recorded human voice or computer-generated. The tile factory allows pre-encoding of certain asset elements prior to being used, as well as real-time (on-demand) rendering of animations. This allows the UI Application to dynamically adapt to personalized recommendation inputs while avoiding unnecessary re-rendering of previously rendered animations.


The tile creation request which is used as a command to the Tile Factory may in certain embodiments include:

    • source movie or poster file
    • path/address to an animation script, which uses a specification language such as Blender's
    • startframe and endframe of the animation to render
    • text to use in the animation
    • graphical and audio context of the tile, such as screen background and background audio
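A hypothetical JSON rendering of such a command, carrying the fields listed above, might look like the following; the field names and values are assumptions for illustration only:

    {
      "source": "posters/movie-1234.png",
      "animation_script": "scripts/zoom-in.blend",
      "start_frame": 0,
      "end_frame": 75,
      "text": "Tonight at 20:30",
      "context": {
        "background_color": "#101010",
        "background_audio": "audio/ambient.mp3"
      }
    }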


The Tile Factory includes a cache and logic to store results for consecutive requests. The tile factory calculates a hash over all animation parameters, including the source asset (identified by location, by file or location properties, or by the source asset hash), to detect whether the tile factory has rendered an animation for an earlier session, and if so, retrieves the animation from the cache.


Third party assets can be included in the user interface. A special type of third party ActiveTile asset is the user's own locally stored content, such as home movies and digitized DVDs. This third party asset may be stored in a database within or external to the platform. The database is in communication with the application engine and also with the tile factory. Thus, the application engine can specify a third party asset and communicate the information through the platform to the tile factory. The tile factory can then retrieve the third party asset from the database.


It should be recognized that the user interface is updated on a regular basis and that the user interface is streamed as a compressed video stream to the client device. Thus, over time the user interface changes. For example, at least the active tile is updated in every video frame or nearly every video frame to produce full-motion video within the active tile. Additionally, the script from the tile creation request may cause movement to occur within the video frame, such that tiles appear to move between a first position within the video frame and a second position within the video frame.


To ensure scalability, objects such as unencoded assets, encoded assets, encoded and unencoded tiles and graphical TV application elements are cached and reused between users' sessions as shown in FIG. 1A. A streaming engine 160 receives requests for the creation of one or more user interfaces. The requests are passed to an application engine 103 that operates in conjunction with an asset retrieval server (tile factory) 120. The application engine 103 determines the tiles and underlying assets that need to be retrieved for the creation of each user interface and sends one or more tile creation requests to the tile factory 120. The asset retrieval server (Tile Factory) 120 calculates a “fingerprint” (using a hash function) of the source information that determines an object. For encoded and unencoded assets, the source information is the reference to the asset (e.g. file name or URL) and metadata that is available for that asset such as file date or information from HTTP headers for the URL, such as the last modified date. For encoded and unencoded tiles, the source information is a tile creation request. To determine whether an object is present in the cache 123, the tile factory 120 compares the fingerprint of the source information with a list of stored hashes in a hash database or linked list 122 stored in working memory 121. The list of stored hashes relates objects that are cached to the location within the cache. If the fingerprint matches an entry in the list of stored hashes 122, the tile factory 120 retrieves the stored object, thereby avoiding the need to generate the tile from the tile creation request or make an external request for the asset or recall a TV application graphical element. Specifically, the platform calculates a hash of the tile creation request to determine if the encoded version of a tile is present in the cache. The hash may include the location of all source assets, animation information, and location and size information, although other combinations of information can be used to create a unique hash value for objects including assets and tiles. If no match exists, the platform executes the tile creation request, and stores the MPEG encoded result (an MPEG fragment) along with the hash of the tile creation request in cached memory 123. Consecutive requests for the same tile will match the stored hash, and the platform will immediately retrieve the stored MPEG fragment.
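The fingerprint calculation over an asset's source information can be sketched as follows. SHA-256 and the metadata keys are assumptions; the description only requires some hash over the asset reference and its available metadata.

    import hashlib

    def asset_fingerprint(reference, metadata):
        # Hash the asset reference (file name or URL) together with its
        # metadata, e.g. a file date or the Last-Modified HTTP header.
        h = hashlib.sha256()
        h.update(reference.encode("utf-8"))
        for key in sorted(metadata):            # stable ordering of metadata
            h.update(("%s=%s" % (key, metadata[key])).encode("utf-8"))
        return h.hexdigest()

    # e.g. asset_fingerprint("http://example.invalid/trailer.mpg",
    #                        {"Last-Modified": "Mon, 09 Jan 2012 10:00:00 GMT"})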


In general, the reused assets reference common content provided to a plurality of users in a user interface, or assets previously used within a user's interactive session.


The list of stored hashes may be a linked list or a database and may contain tuples of (the object, the hash value for the object). The hash of the tile creation request or the unencoded/encoded asset is searched using common hash table techniques. In some implementations, the hash tables may include an array where the hash value is used as an array index. The elements in the array contain a linked list of objects that have the hash value. In other embodiments, a long hash value is used that substantially guarantees that only a single object is associated with the hash value. In such a configuration, an array is not preferred and a linked list containing (hash value, object) tuples is preferred. The linked list uses a binary tree as an index into the list. The position in the linked list of a certain hash value is then obtained by performing a binary search. Alternatively, the linked list elements could be stored on a disk where the hash value is used as part of the file name. The computer operating system would map the hash value onto a file location on disk (e.g. an object with a hash value 2938479238 could be stored in a file having filename /var/cache/entry-2938479238.dat).
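A minimal sketch of the disk-based variant, following the example file naming above (the cache directory and read logic are otherwise assumptions):

    import os

    CACHE_DIR = "/var/cache"

    def cache_path(hash_value):
        # An object with hash value 2938479238 maps to /var/cache/entry-2938479238.dat
        return os.path.join(CACHE_DIR, "entry-%d.dat" % hash_value)

    def lookup(hash_value):
        # The operating system's file lookup serves as the hash index.
        path = cache_path(hash_value)
        if os.path.exists(path):
            with open(path, "rb") as f:
                return f.read()
        return None   # cache miss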


As expressed above, any hashing algorithm may be used to produce the hash value. One example for calculating a hash value of a tile creation request is to sum the bytes that constitute the request in computer memory, modulo a certain number that bounds the hash value (e.g. 2^1024 for a 1024-bit hash value).
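A sketch of that byte-summing hash, assuming the request is available as raw bytes:

    def simple_request_hash(request_bytes, bits=1024):
        # Sum the bytes that constitute the request in memory, modulo 2**bits
        # (2^1024 in the example of a 1024-bit hash value above).
        return sum(request_bytes) % (2 ** bits)

    # e.g. simple_request_hash(b'{"source": "posters/movie-1234.png"}')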


It should be recognized that the cache 123 may be cleaned up according to a least-recently-used policy. The cache may be on disk or in memory and has a constrained size. Another policy may be to remove the least frequently used object in a certain time window (e.g. least frequently used object in the past hour). Those of ordinary skill in the art should understand that other cache policies may be used without deviating from the scope of the invention.
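A least-recently-used cache with a constrained size can be sketched as follows; this is an illustrative structure, not the platform's actual implementation.

    from collections import OrderedDict

    class LRUCache:
        # Constrained-size cache that evicts the least-recently-used entry.
        def __init__(self, max_entries):
            self.max_entries = max_entries
            self._store = OrderedDict()

        def get(self, key):
            if key not in self._store:
                return None               # cache miss
            self._store.move_to_end(key)  # mark as most recently used
            return self._store[key]

        def put(self, key, value):
            self._store[key] = value
            self._store.move_to_end(key)
            if len(self._store) > self.max_entries:
                self._store.popitem(last=False)   # evict least recently used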



FIG. 2 is a flow chart that outlines the server-side creation of a user interface in real-time. First, an interactive session is established between a client device and the user interface generating platform 201. The user interface generating platform may be located at a cable television head-end or another remote location from the client device wherein the client device and the user interface generating platform are in communication by one or more networks. The establishment of an interactive session may be the result of user interaction with the client device or may be established automatically with the client device. In response to establishment of the interactive session, a plurality of interactive animated assets are selected for presentation to the user based in part upon the user's profile 202. The user's profile may contain historical information about past selections and may be populated either directly by the user (e.g. information entry in forms) or through indirect entry (e.g. tracking user's selections and activity). A recommendation engine may assist in the selection of the assets as described above. A tile creation request is then generated that includes animation information and asset source specification for the one or more assets within a tile 203. The tile creation request will identify objects including encoded and unencoded tiles and underlying assets. Additionally, associated layout information and any scripting can be provided in the tile creation request or determined by the tile factory. A hash value is determined for the tile creation request 204. The hash value is stored and is used for comparison with a hash database or linked list that contains pre-encoded items within a cache that can be reused without the need for encoding or re-encoding. For example, the cache may contain a plurality of MPEG-encoded elements that can be re-used and provided to a stitcher or stitching application. A video frame is then stitched together based upon the one or more tiles, the scripting animation, and assets that are in the form of encoded fragments 205. If the assets or tiles to be presented are in an unencoded form or need to be re-encoded, the assets and tiles are encoded/transcoded into encoded fragments. The encoded fragments are assembled in the encoded domain to form a complete video frame for the user interface. Each video frame is then transmitted in real-time to the requesting client device for display on a display device 206. The system transmits a sequence of video frames to the client device forming an encoded video stream (e.g. an MPEG elementary stream). The encoded video stream is decoded by the client device and displayed as a user-interface on the display. As previously indicated, the user interface will include animations and action wherein tiles will automatically become active and then inactive without user intervention. User intervention can change the flow of a script, allowing a user to interact with one or more tiles and view assets (previews, posters, and programs).
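The steps of FIG. 2 can be tied together in a compact sketch. Every callable below is passed in as a parameter because the concrete platform modules (recommendation engine, tile factory, stitcher) are described only functionally above; the names are assumptions.

    def build_user_interface_frame(profile, select_assets, make_request,
                                   hash_request, encode_tile, cache, stitch):
        assets = select_assets(profile)           # step 202: recommendation-based selection
        fragments = []
        for asset in assets:
            request = make_request(asset)         # step 203: tile creation request
            key = hash_request(request)           # step 204: hash value
            fragment = cache.get(key)
            if fragment is None:                  # miss: encode/transcode the tile
                fragment = encode_tile(request)
                cache.put(key, fragment)
            fragments.append(fragment)
        return stitch(fragments)                  # step 205: stitch in the encoded domain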



FIG. 3 shows an example user interface. The upper-left tile is displaying a trailer (asset) of the movie (source content), while the other three (Comedy, Pop, and Voetbal (football)) assets are stills. After some time, the upper-left tile stops animating and a different tile becomes active: it starts to play trailers and previews of TV shows. The audio follows the active (highlighted) tile.



FIG. 4 shows an alternative embodiment of the user interface. In this embodiment, the upper part of the screen displays a number of movie posters (ActiveTiles) where the leftmost tile is active. It is displayed with brighter colours than the non-active tiles, has a marker, and is enlarged. The trailer corresponding to the highlighted tile plays in the background. The trailer may not play until its end; depending on application settings, the trailer may stop after a pre-set time, and the next tile becomes active (i.e. its poster is highlighted, and its trailer starts playing). The lower part is a more traditional navigation bar, showing other categories (‘tomorrow on RTL4’, ‘watch more thrillers’, and ‘tonight on TV’).


The embodiments of the invention described above are intended to be merely exemplary; numerous variations and modifications will be apparent to those skilled in the art. All such variations and modifications are intended to be within the scope of the present invention as defined in any appended claims.


It should be recognized by one of ordinary skill in the art that the foregoing methodology may be performed in a video processing environment and the environment may include one or more processors for processing computer code representative of the foregoing described methodology. The computer code may be embodied on a tangible computer readable storage medium, i.e. a computer program product. Additionally, the functions of the modules in FIG. 1 may be distributed among a plurality of processors either local or remote from one another.


The present invention may be embodied in many different forms, including, but in no way limited to, computer program logic for use with a processor (e.g., a microprocessor, microcontroller, digital signal processor, or general purpose computer), programmable logic for use with a programmable logic device (e.g., a Field Programmable Gate Array (FPGA) or other PLD), discrete components, integrated circuitry (e.g., an Application Specific Integrated Circuit (ASIC)), or any other means including any combination thereof. In an embodiment of the present invention, predominantly all of the reordering logic may be implemented as a set of computer program instructions that is converted into a computer executable form, stored as such in a computer readable medium, and executed by a microprocessor within the array under the control of an operating system.


Computer program logic implementing all or part of the functionality previously described herein may be embodied in various forms, including, but in no way limited to, a source code form, a computer executable form, and various intermediate forms (e.g., forms generated by an assembler, compiler, networker, or locator.) Source code may include a series of computer program instructions implemented in any of various programming languages (e.g., an object code, an assembly language, or a high-level language such as Fortran, C, C++, JAVA, or HTML) for use with various operating systems or operating environments. The source code may define and use various data structures and communication messages. The source code may be in a computer executable form (e.g., via an interpreter), or the source code may be converted (e.g., via a translator, assembler, or compiler) into a computer executable form.


The computer program may be fixed in any form (e.g., source code form, computer executable form, or an intermediate form) either permanently or transitorily in a tangible storage medium, such as a semiconductor memory device (e.g., a RAM, ROM, PROM, EEPROM, or Flash-Programmable RAM), a magnetic memory device (e.g., a diskette or fixed disk), an optical memory device (e.g., a CD-ROM), a PC card (e.g., PCMCIA card), or other memory device. The computer program may be fixed in any form in a signal that is transmittable to a computer using any of various communication technologies, including, but in no way limited to, analog technologies, digital technologies, optical technologies, wireless technologies, networking technologies, and internetworking technologies. The computer program may be distributed in any form as a removable storage medium with accompanying printed or electronic documentation (e.g., shrink wrapped software or a magnetic tape), preloaded with a computer system (e.g., on system ROM or fixed disk), or distributed from a server or electronic bulletin board over the communication system (e.g., the Internet or World Wide Web.)


Hardware logic (including programmable logic for use with a programmable logic device) implementing all or part of the functionality previously described herein may be designed using traditional manual methods, or may be designed, captured, simulated, or documented electronically using various tools, such as Computer Aided Design (CAD), a hardware description language (e.g., VHDL or AHDL), or a PLD programming language (e.g., PALASM, ABEL, or CUPL.).

Claims
  • 1. A method for displaying a plurality of interactive animated assets associated with entertainment content on a television coupled to a client device through a communications network, the television associated with a user, the method comprising: establishing an interactive session between the client device and a user interface generating platform; in response to establishment of the interactive session, automatically identifying a plurality of interactive animated assets to present to the user based upon a user profile; generating a tile creation request including animation information and asset source specifications for the one or more assets within the tile; calculating a hash value of the tile creation request; stitching the video frame based on the TV application's screen layout with the plurality of interactive animated assets to form a sequence of encoded video frames defining a user interface; and transmitting the sequence of encoded video frames to the client device associated with the user.
  • 2. The method according to claim 1 further comprising: comparing the hash value of the tile creation request with a hash value list of cached assets and if the hash value is within the hash list, retrieving the asset and providing the asset for stitching.
  • 3. The method according to claim 2, wherein if the hash value is not present in the cache list, storing the hash value and the associated asset in cache storage.
  • 4. The method according to claim 1, wherein the interactive animated assets include graphical components for a plurality of states.
  • 5. The method according to claim 4, wherein the graphical components are tiles that are smaller than an entire video frame.
  • 6. The method according to claim 4, wherein the first state is an active state and has a graphical component that is a full motion video and the second state is an inactive state that has a graphical component that is a still image.
  • 7. The method according to claim 1 further comprising: in accordance with the scripting information, encoding or transcoding an interactive animated asset or other graphical portion of the TV application screen.
  • 8. The method according to claim 1 wherein the user profile includes historical data based on the user's previous actions.
  • 9. The method according to claim 1, wherein a user selection of a graphical component of an interactive animated asset causes the selected interactive animated asset to become active and makes all other assets within the user interface inactive.
  • 10. The method according to claim 1, wherein the communications network does not include a quality of service.
  • 11. The method according to claim 10 wherein the method occurs in substantially real-time.
  • 12. The method according to claim 1 wherein one or more of the interactive animated assets is pre-encoded in a stitchable format so that stitching takes place in the encoded domain.
  • 13. The method according to claim 1 wherein the interactive animated assets include macroblock encoded elements for stitching in the encoded domain.
  • 14. The method according to claim 1, further comprising: automatically activating one of the interactive animated assets into an active state, causing the interactive animated asset to begin presentation of a full motion video sequence.
  • 15. The method according to claim 6, further comprising: after a predetermined amount of time, deactivating the active interactive animated asset into a non-active state and activating a different interactive animated asset into an active state.
  • 16. The method according to claim 6, wherein upon activation of an interactive animated asset, indicia of the activation is stitched with the corresponding graphical component into the sequence of encoded video frames.
  • 17. The method according to claim 6, wherein activation of an interactive animated content asset causes an audio track associated with the activated interactive animated content asset to be transmitted with the sequence of video frames.
  • 18. The method according to claim 6, wherein activation of an interactive animated content asset may cause graphical portions of the TV application's screen layout to be replaced with the full motion video sequence.
  • 19. The method according to claim 1, wherein an interactive animated asset in an inactive state is represented by a still image sized to fit within a tile.
  • 20. The method according to claim 1, further comprising: upon receipt of a signal indicative of selection of an interactive animated asset in an active state, accessing interactive functionality associated with the interactive animated asset causing the video frame sequence to change.
  • 21. A computer program product comprising a tangible computer-readable medium and computer code thereon for displaying a plurality of interactive animated assets on a television coupled to a client device through a communications network, the television associated with a user, the computer code comprising: computer code for establishing an interactive session between the client device and a user interface generating platform; computer code, in response to establishment of the interactive session, for automatically identifying a plurality of interactive animated assets to present to the user based upon a user profile; computer code for executing a TV application providing a layout for a video frame including a plurality of locations within the video frame for insertion of the interactive animated assets and the TV application also providing scripting information for the assets within the video frame; computer code for calculating a hash value of the tile creation request; computer code for stitching the video frame based on the TV application screen output with the plurality of interactive animated assets to form a sequence of encoded video frames; and computer code for transmitting the sequence of encoded video frames to the client device associated with the user.
  • 22. The computer program product according to claim 21 further comprising: computer code for comparing the hash value of the tile creation request with a hash value list of cached assets and if the hash value is within the hash list, retrieving the asset and providing the asset for stitching.
  • 23. The computer program product according to claim 22, wherein if the hash value is not present in the cache list, storing the hash value and the associated asset in cache storage.
  • 24. The computer program product according to claim 21, wherein the interactive animated assets include graphical components for a plurality of states.
  • 25. The computer program product according to claim 24, wherein the graphical components are tiles that are smaller than an entire video frame.
  • 26. The computer program product according to claim 24, wherein the first state is an active state and has a graphical component that is a full motion video and the second state is an inactive state that has a graphical component that is a still image.
  • 27. The computer program product according to claim 21 further comprising: computer code for encoding or transcoding an interactive animated asset in accordance with the scripting information.
  • 28. The computer program product according to claim 21 wherein the user profile includes historical data based on the user's previous actions.
  • 29. The computer program product according to claim 21 wherein one or more of the interactive animated assets is pre-encoded in a stitchable format so that stitching takes place in the encoded domain.
  • 30. The computer program product according to claim 21 wherein the interactive animated assets include macroblock encoded elements for stitching in the encoded domain.
  • 31. The computer program product according to claim 21, further comprising: computer code for automatically activating one of the interactive animated assets into an active state, causing the interactive animated asset to begin presentation of a full motion video sequence.
  • 32. The computer program product according to claim 31, further comprising: computer code for deactivating the active interactive animated asset into a non-active state and activating a different interactive animated asset into an active state after a predetermined amount of time.
  • 33. The computer program product according to claim 31, wherein upon activation of an interactive animated content source, indicia of the activation is stitched into the sequence of encoded video frames.
  • 34. The computer program product according to claim 31, further comprising: computer code for causing an audio track associated with the activated interactive animated content asset to be transmitted with the sequence of video frames upon activation of an interactive animated content asset.
  • 35. The computer program product according to claim 21, wherein an interactive animated content asset in an inactive state is represented by a still image sized to fit within a tile.
  • 36. The computer program product according to claim 21, further comprising: computer code for accessing interactive functionality associated with the interactive animated asset causing the video frame sequence to change upon receipt of a signal indicative of selection of an interactive animated asset in an active state.
PRIORITY CLAIM

The present U.S. Patent Application claims priority from U.S. Provisional Patent Application 61/584,538 filed on Jan. 9, 2012 entitled “Rendering of an Interactive Lean-Backward User Interface on a Television”, which is incorporated herein by reference in its entirety.

Provisional Applications (1)
Number Date Country
61584538 Jan 2012 US