The invention relates to a method for delivery of interactive content.
Known methods for delivering such content rely on a server, e.g. a VoD server, that allows linear content, e.g. video, to be played out in trick modes (forward, rewind, etc.). Trick-mode support is generally realized using additional content indexing, which can be performed by the VoD server itself. More advanced interactions (alternatives, non-linear scenarios, subset selection based on user interest, etc.) require a dedicated application to be created and downloaded onto the client device. This requires a separate application to be created for each type of client (web, mobile, IPTV) and even for different devices of a given client type (e.g. different IPTV set-top boxes). As such, the same content item needs to be customized several times.
It is an object of the method according to the invention to allow more complex interactions and to apply on-demand customization of content without the need to create a dedicated application for each type of client.
The method according to the invention realizes this object in that it includes the steps of:
In this way content is conveyed in a more flexible way between the content producer and the server entity in a network. The content producer defines the possible actions as a function of time, as reflected in the actionmap, and the interpretation of the actions is done in the network by the server entity. Interactivity and customization are thus directly driven by the content producer, and adaptation to different client types is done in the network by the server entity. Interpretation of a user action is done in a very flexible way, since the actionmap contains the possible actions and these actions can differ as a function of time.
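The time-dependent mapping from user events to resulting actions can be sketched as a small data structure. The following is a minimal illustration only; the class and event names (ActionDescriptor, ActionMap, "key_left") are invented for this sketch and do not appear in the description.

```python
from dataclasses import dataclass

@dataclass
class ActionDescriptor:
    start: float   # start of the validity interval on the timeline (seconds)
    end: float     # end of the validity interval (seconds)
    action: str    # resulting action, e.g. "jump_next_mark"

class ActionMap:
    """Maps a user event to an action that depends on the play-out position."""

    def __init__(self):
        self._map = {}  # event name -> list of time-dependent descriptors

    def allow(self, event, start, end, action):
        self._map.setdefault(event, []).append(ActionDescriptor(start, end, action))

    def resolve(self, event, cursor):
        """Return the action for `event` at play-out position `cursor`, if any."""
        for d in self._map.get(event, []):
            if d.start <= cursor < d.end:
                return d.action
        return None  # event not allowed at this position

am = ActionMap()
am.allow("key_left", 0, 60, "jump_next_mark")
am.allow("key_left", 60, 120, "ignore")   # same event, different action later on
am.resolve("key_left", 30)   # -> "jump_next_mark"
am.resolve("key_left", 90)   # -> "ignore"
```

The same received event thus resolves to different actions purely on the basis of the current position in the content, which is the flexibility the actionmap provides.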
A feature of an embodiment of the method according to the invention is that said control data comprises user interface data, said presentation information being based on said user interface data in combination with said time dependent action descriptors.
In this way the content production entity can deliver new and possibly customized formats to networked clients serviced by the server entity as opposed to the known systems where the look of the interactive content such as DVD content is always the same.
Still additional features of the embodiment of the method according to the invention are that said presentation information is based on server-specific user interface data combined with said time dependent action descriptors, or that said presentation information is in addition based on server-specific user interface data.
The content producer may define the different interactions and possibly the format in which they are presented, but in this way the look and feel in which they are shown to the client may still be customized in the network by the server entity, possibly based on the user interface data inside the content. As an example, content created by the producer Warner Bros and delivered by the network provider Belgacom may have interaction buttons in the look and feel of Belgacom.
Another feature of an embodiment of the method according to the invention is that said control data comprises markers on at least one timeline that identify a content segment, and that transmission of said identified content segment is determined as a function of said action descriptor in combination with a correlation of said receipt time with said markers.
In this way different actions are performed depending on the time location in the content.
The invention also relates to a production entity and to a content server entity realizing the subject method.
Embodiments of the method and its features, and of the production entity and of the server entity realizing these are hereafter described, by way of example only, and with reference to the accompanying figures where:
The system of
MXF as shown in
For each event allowed by the CP, AM defines a resulting action. This resulting action is time-dependent; in other words, it depends on the position in the play-out of the multimedia clips. As a concrete example, suppose that the event received from a user via his remote control is translated to “jump to the next temporal mark” at a time instant before M2 on T2 (
As another example, suppose that the marked region M-in/M-out in
In the considered embodiment AM contains explicit actions. As an alternative, some “application profiles” may be defined, each consisting of a predefined set of event-action pairs. In this case, AM may simply contain the application profile id. CP defines these profiles, and they are known and stored by the server S.
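The profile alternative can be sketched as a simple lookup on the server side. This is an illustration only; the profile ids and event-action pairs below are invented, not taken from the description.

```python
# Predefined application profiles as stored by the server S.
# Profile ids and event names here are purely illustrative.
PROFILES = {
    "linear-vod": {"key_ff": "fast_forward", "key_rw": "rewind"},
    "chaptered":  {"key_right": "jump_next_mark", "key_left": "jump_prev_mark"},
}

def expand_actionmap(actionmap):
    """Return the event-action pairs for an actionmap.

    The actionmap either names a predefined profile by id, or carries its
    explicit event-action pairs under "events".
    """
    profile_id = actionmap.get("profile")
    if profile_id is not None:
        return PROFILES[profile_id]          # profile id -> stored pairs
    return actionmap.get("events", {})       # explicit actions in AM itself

expand_actionmap({"profile": "chaptered"})   # -> {"key_right": "jump_next_mark", ...}
```

Carrying only a profile id keeps the per-content metadata small while still letting the content producer choose among interaction behaviours known to the server.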
In the considered embodiment the global meta-data also contains a user interface information block UI, but in an alternative embodiment the global meta-data can be limited to AM. UI contains layout indicators that enable S to create the layout of a user interface for U1, U2 and U3.
In the considered embodiment an RTMP streamer RTMPP is used to target flash clients (U1), an MPEG-TS streamer MPEGTSP is used to target IPTV clients (U3), and an RTP streamer RTPP is used to target mobile clients (U2).
ELI loads the AM content from CS, and the AM info remains available as long as the user session and the instance exist.
Before any user can request a content item, an ingest process IP, as shown in
As shown in
In the considered embodiment S contains the execution descriptors (in LDB) as well as the MXF content (in CS). In an alternative embodiment the descriptor internally contains a link to MXF content located on a different server. Indeed, a content query can be done on a server S1 containing the descriptor database, but the actual video pump (the server as described in
Using the execution descriptor, U1, U2 and U3 are then informed of the interaction/customization actions that are possible or allowed on the requested content.
Feedback events from U1, U2, U3 indicating the requested action are handled by an event mapper EM in S as shown in
AV retrieves the multimedia data from CS for streaming via the concerned streamer. In doing so it keeps track of the corresponding time location of the sent clips or segments by means of a time cursor (not shown) on the timelines T1 or T2. When receiving an action from EM, AV checks if this action implies a change in the cursor position and executes this change as explained earlier with respect to the use of the markers. Changes in the cursor position as a result of the retrieved action can happen immediately or may be remembered until the cursor hits another mark. E.g. while the cursor is in a non-skippable region, a jump request to the next temporal mark may not be executed. However, it can be remembered and executed at the moment the non-skippable region is left. After a change of the cursor position, AV continues feeding the concerned streamer with the retrieved data corresponding to the new location of the cursor.
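The cursor handling described above, including a jump request remembered inside a non-skippable region and executed once that region is left, can be sketched as follows. All names (TimelineCursor, the mark positions, the region bounds) are invented for this sketch.

```python
class TimelineCursor:
    """Illustrative time cursor with temporal marks and non-skippable regions."""

    def __init__(self, marks, no_skip_regions):
        self.position = 0.0
        self.marks = sorted(marks)         # temporal marks on the timeline (seconds)
        self.no_skip = no_skip_regions     # list of (start, end) non-skippable regions
        self.pending_jump = False          # remembered, not-yet-executed jump request

    def _in_no_skip(self, t):
        return any(start <= t < end for start, end in self.no_skip)

    def request_jump_next_mark(self):
        if self._in_no_skip(self.position):
            self.pending_jump = True       # defer: remember until the region is left
        else:
            self._jump()

    def _jump(self):
        following = [m for m in self.marks if m > self.position]
        if following:
            self.position = following[0]   # move cursor to the next temporal mark
        self.pending_jump = False

    def advance(self, dt):
        """Called as streaming proceeds; executes any deferred jump on the way out."""
        self.position += dt
        if self.pending_jump and not self._in_no_skip(self.position):
            self._jump()

c = TimelineCursor(marks=[20.0, 40.0], no_skip_regions=[(0.0, 10.0)])
c.position = 5.0
c.request_jump_next_mark()   # inside the non-skippable region: deferred
c.advance(6.0)               # position 11.0, region left -> jump to mark 20.0
```

After the `advance` call the cursor sits at the first mark past the non-skippable region, and AV would resume feeding the streamer from that new location.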
Interactivity and customization are thus driven by the content producer in a very flexible way. As an example, an interactive news program can be created with 3 different timelines, representing politics, culture and sports. Each timeline contains multiple clips. The AM can be defined such that, for instance, a ‘left’ arrow on a remote control used by a user of a user entity denotes a skip to the next clip on the current timeline and an ‘up’ arrow denotes a skip to the next timeline in a looped fashion.
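The interactive-news example can be sketched as a small session state machine. The class and event names below, and the clip ids, are invented for illustration; only the behaviour (next clip on ‘left’, looped timeline switch on ‘up’) follows the example above.

```python
class NewsSession:
    """Illustrative session state for the three-timeline interactive news example."""

    def __init__(self, timelines):
        self.timelines = timelines        # timeline name -> ordered list of clip ids
        self.names = list(timelines)
        self.timeline = 0                 # index of the current timeline
        self.clip = 0                     # index of the current clip on it

    def on_event(self, event):
        if event == "key_left":           # skip to next clip on the current timeline
            clips = self.timelines[self.names[self.timeline]]
            self.clip = (self.clip + 1) % len(clips)
        elif event == "key_up":           # skip to next timeline, looped
            self.timeline = (self.timeline + 1) % len(self.names)
            self.clip = 0
        return self.names[self.timeline], self.clip

s = NewsSession({"politics": ["p1", "p2"],
                 "culture":  ["c1"],
                 "sports":   ["s1", "s2"]})
s.on_event("key_left")   # -> ("politics", 1)
s.on_event("key_up")     # -> ("culture", 0)
```

The event-to-behaviour binding itself would come from the AM shipped with the content, so the producer, not the client application, decides what ‘left’ and ‘up’ mean at any point.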
It has to be noted that the above embodiments are described by way of their functionality rather than by a detailed implementation, because it should be obvious for a person skilled in the art to realize the implementation of the elements of the embodiments based on this functional description.
It has also to be noted that the above described functions may be provided through the use of dedicated hardware as well as hardware capable of executing software in association with appropriate software. When provided by a processor, the functions may be provided by a single dedicated processor, by a single shared processor, or by a plurality of individual processors, some of which may be shared. Other hardware, conventional and/or custom, may also be included.
The above description and drawings merely illustrate the principles of the invention. It will thus be appreciated that, based on this description, those skilled in the art will be able to devise various arrangements that, although not explicitly described or shown herein, embody the principles of the invention and are included within its spirit and scope. Furthermore, the examples recited herein are principally intended expressly to be only for pedagogical purposes to aid the reader in understanding the principles of the invention and the concepts contributed by the inventor(s) to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions. Moreover, all statements herein reciting principles, aspects, and embodiments of the invention, as well as specific examples thereof, are intended to encompass equivalents thereof.
Number | Date | Country | Kind |
---|---|---|---|
09290642.9 | Aug 2009 | EP | regional |
Filing Document | Filing Date | Country | Kind | 371c Date |
---|---|---|---|---|
PCT/EP2010/061671 | 8/11/2010 | WO | 00 | 2/21/2012 |