The present application is related to U.S. patent applications Ser. Nos. 13/457,520, 13/457,524, and 13/457,534, filed on an even date herewith.
The present invention is related generally to processing communications that relate to a point or range within a multimedia presentation.
The consumption of media presentations by users (i.e., media consumers) is common. Here, “media presentation” or “multimedia presentation” refers to any digital content, including but not limited to video, audio, and interactive files. Also, “consumption” refers to any type of human interaction with a media presentation, including but not limited to watching, listening to, and interacting with the presentation.
Users can provide commentary upon points or periods of interest in a multimedia presentation. For example, a user may comment on an event occurring in the media presentation that is meaningful in the context of the multimedia presentation. Typically, users communicate these comments to other users. This communication, or sharing, of a user's comments may be achieved via, for example, online social media services (e.g., social networking websites), weblogs (i.e., “blogs”), and online forums.
The above considerations, and others, are addressed by the present invention, which can be understood by referring to the specification, drawings, and claims. According to aspects of the present invention, a communication originating from a user of an end-user device is processed. Using at least part of the communication, and using content information relating to the content of a multimedia presentation, one or more content items may be selected. These selected content items may then be displayed to the user. The user may then include any of the content items in his communication.
The user's communication may be a comment or post made by the user and may relate to part of a multimedia presentation.
Content information may, for example, be extracted from the media presentation by performing a media-analysis process or may be provided by, for example, a provider of the media presentation. A content item may be a media item, a multimedia item, a reference to a media item, a reference to a multimedia item, or a web-page. A content item may also be a summary of any of the aforementioned content items or a link to any of the aforementioned content items.
The content items are selected (e.g., using a web-search engine) depending upon the content information and upon the user's communication. The content items may also be selected depending upon a profile of the user, a profile of an intended recipient of the communication, the context of the comment (such as an earlier post), the point or range within the multimedia presentation with respect to which the communication is made, or a further communication (e.g., one made by the user or by another party in relation to the multimedia presentation, for instance at the point, or within the range, with respect to which the communication is made).
Preferably the selected content items are displayed to the user whilst the user composes his communication. Thus, the user may choose to include any of the selected content items in his communication.
While the appended claims set forth the features of the present invention with particularity, the invention, together with its objects and advantages, may be best understood from the following detailed description taken in conjunction with the accompanying drawings of which:
Turning to the drawings, wherein like reference numerals refer to like elements, the invention is illustrated as being implemented in a suitable environment. The following description is based on embodiments of the invention and should not be taken as limiting the invention with regard to alternative embodiments that are not explicitly described herein.
Apparatus for implementing any of the below described arrangements, and for performing any of the below described method steps, may be provided by configuring or adapting any suitable apparatus, for example one or more computers or other processing apparatus or processors, or providing additional modules. The apparatus may comprise a computer, a network of computers, or one or more processors, for implementing instructions and using data, including instructions and data in the form of a computer program or plurality of computer programs stored in or on a machine-readable storage medium such as computer memory, a computer disk, ROM, PROM, etc., or any combination of these or other storage media.
It should be noted that certain of the process steps depicted in the below described process flowcharts may be omitted, or such process steps may be performed in a different order from that presented below and shown in those flowcharts. Furthermore, although all the process steps have, for convenience and ease of understanding, been depicted as discrete, temporally sequential steps, some of the process steps may in fact be performed simultaneously or in a manner that at least partially overlaps temporally.
Referring now to the Figures,
A portion of the TV program 2 (hereinafter referred to as the “portion” and shown in
The representative network 14 comprises a service operator 16, a video server 18, a multimedia server 19, a set-top box 20, a television 22, a user 24, the Internet 26, a database 28, and a tablet computer 30.
The service operator 16 comprises apparatus that provides a television feed corresponding to the TV program 2. The service operator 16 is coupled to the video server 18 (e.g., either by a wireless or wired connection) such that, in operation, the service operator 16 provides the television feed to the video server 18.
The video server 18 (e.g., a cable head-end) is a facility for receiving, processing, and re-distributing television signals (e.g., the television feed). In addition to being coupled to the service operator 16, the video server 18 is coupled (e.g., either by a wireless or wired connection) to the multimedia server 19 and to the set-top box 20.
In operation, the video server 18 inter alia receives the television feed from the service operator 16. The video server 18 then processes the received television feed and distributes the processed feed (e.g., encoded in an appropriate multimedia container) to the multimedia server 19 and to the set-top box 20. The video server 18 and its functionality are described in more detail below with reference to
In addition to being coupled to the video server 18, the multimedia server 19 is coupled to the Internet 26, to the database 28, and to the tablet computer 30. The multimedia server 19 and its functionality are described in more detail below with reference to
In addition to being coupled to the video server 18, the set-top box 20 is also coupled to the TV 22. The set-top box 20 is a conventional device that, in operation, processes a multimedia container received from the video server 18 to provide content for display on the TV 22 (to the user 24).
The TV 22 is a conventional television on which, in operation, content from the set-top box 20 is displayed to the user 24.
The user 24 is a user of the TV 22, the database 28, and the tablet computer 30. For example, the user 24 may watch TV programs on the TV 22, store personal information in electronic form in the database 28, and browse the Internet 26 using the tablet computer 30. Furthermore, the user 24 is a subscriber to a service that allows the user 24 to upload comments and relevant information (e.g., onto the multimedia server 19) relating to media presentations that he has consumed or is currently consuming. For example, whilst the user 24 watches the TV program 2, the user 24 is able to upload (in electronic form) comments and content items relating to the TV program 2, e.g., relating to events occurring in the TV program 2, to the multimedia server 19 via the user's tablet computer 30 (as described in more detail below with reference to
The database 28 is storage for personal electronic information of the user 24. In other words, the database 28 is used by the user 24 to store the user's personal electronic content. The personal content stored by the user may include, but is not limited to, photographs, home-movies, and text documents. The database 28 may reside in a device used, owned, or operated by the user 24, for example a computer of the user 24, e.g., a laptop computer or the tablet computer 30.
The tablet computer 30 is a conventional tablet computer. In addition to being coupled to the multimedia server 19, the tablet computer 30 has access to the Internet 26 (e.g., via Wi-Fi Internet access). In other embodiments, a different type of device may replace the tablet computer 30, e.g., a different type of computer such as a laptop computer or a “smartphone.” The tablet computer 30 is configured to allow the user 24 to input a comment relating to a media presentation (e.g., the TV program 2) that the user 24 has consumed or is consuming. For example, the user 24 may type such a comment on to the tablet computer 30. The tablet computer 30 may then upload the user's comment and content items onto the multimedia server 19, e.g., via a wired or wireless link, or via the Internet 26. The tablet computer is described in more detail below with reference to
The video server 18 comprises a Transport Stream (TS) encoder 32, a transmitter 34, an Electronic Program Guide (EPG) service module 36, a closed-caption service module 38, and a media analysis module 40.
The TS encoder 32 is for encoding information into an appropriate Multimedia Container for transmission to the set-top box 20. The TS encoder 32 is coupled to the service operator 16 and to the transmitter 34 such that the TV feed may be received by it from the service operator 16, encoded, and delivered to the transmitter 34.
The transmitter 34 is coupled to the set-top box 20 such that information (e.g., the TV feed) encoded by the TS encoder 32 is transmitted to the set-top box 20.
The EPG service module 36 is a provider of broadcast programming or scheduling information for current and upcoming TV programs. The EPG service module 36 is coupled to the media analysis module 40 such that information may be sent from the EPG service module 36 to the media analysis module 40.
The closed-caption service module 38 is a provider of closed caption information (i.e., sub-titles) for current and upcoming TV programs. The closed captions may be in any appropriate language. The closed-caption service module 38 is coupled to the media analysis module 40 such that information may be sent from the closed-caption service module 38 to the media analysis module 40.
In addition to being coupled to the EPG service module 36 and to the closed-caption service module 38, the media analysis module 40 is coupled to the service operator 16 such that, in operation, it receives the TV feed provided by the service operator 16. The media analysis module 40 is for analyzing and processing the received television feed, programming information, and closed caption information, as described in more detail below with reference to
The multimedia server 19 comprises a content recommendation module 42, a comment service module 44, a comment analysis module 46, and a further database 48.
The media analysis module 40 (of the video server 18) is coupled to the content recommendation module 42 such that output from the media analysis module 40 may be sent to the content recommendation module 42.
In addition to being coupled to the media analysis module 40, the content recommendation module 42 is coupled to the comment analysis module 46. This is so that the content recommendation module 42 may receive and process the output from the media analysis module 40 and output from the comment analysis module 46. Also, the content recommendation module 42 is coupled (e.g., by a wired or wireless connection) to the Internet 26 and to the database 28. This is so that the content recommendation module 42 may retrieve content over the Internet 26 (e.g., from a web-server not shown in the Figures) and may retrieve content (e.g., the user's personal pictures or movies) from the database 28. Also, the content recommendation module 42 is coupled to the comment service module 44. This is so that output from the content recommendation module 42 may be sent to the comment service module 44. The operation of the content recommendation module 42 is described in more detail below with reference to
In addition to being coupled to the content recommendation module 42, the comment service module 44 is coupled (e.g., by a wired or wireless connection) to the tablet computer 30. This is so that information may be sent from the tablet computer 30 to the comment service module 44 and vice versa. In other embodiments, the comment service module 44 may also be coupled to the TV 22 such that information may be sent from the TV 22 to the comment service module 44 and vice versa. Also, the comment service module 44 is coupled to the comment analysis module 46 and to the further database 48. This is so that information may be sent from the comment service module 44 to each of the comment analysis module 46 and the further database 48, as described in more detail below with reference to
The comment analysis module 46 is for analyzing and processing an output of the comment service module 44, as described in more detail below with reference to
The further database 48 is for storing data received by it from the comment service module 44.
The tablet computer 30 comprises a processor 80 and a display 82 (operatively connected to the processor 80).
The processor 80 is connected (e.g., via a wireless link) to the Internet 26 and to the multimedia server 19 so that the processor 80 may receive information over the Internet 26 and from the multimedia server 19. Also, the processor 80 is configured to send (i.e., transmit) information over the Internet 26 and to the multimedia server 19. Thus, the processor 80 acts as a transmitting module and a receiving module for the tablet computer 30.
The display 82 is a touch-screen display. The user 24 may input information to the tablet computer 30 using the display 82.
The processor 80 and the display 82 are coupled such that the processor 80 may send information to the display 82 for displaying to the user 24. Also, the processor 80 may receive information input to the tablet computer 30 by the user 24 using the display 82.
The processor 80 is arranged to process received information in accordance with the below described methods.
At step s2, the TV feed corresponding to the TV program 2 is sent from the service operator 16 to the TS encoder 32 and to the media analysis module 40 of the video server 18.
After step s2 the method proceeds to both step s4 and step s24. Step s24 is described in more detail below after a description of steps s4 through s22.
At step s4, the TS encoder 32 encodes the television feed for the TV program 2 that it received from the service operator 16. The encoded television feed may be inserted into an appropriate Multimedia Container. Examples of a Multimedia Container include the MPEG Transport Stream, Flash Video, and QuickTime. The MPEG Transport Stream may comprise, for example, audio elementary streams, video elementary streams, closed-caption or subtitle elementary streams, and a Program Association Table. The TV feed, as it may comprise text, audio, video, and other types of information (such as graphical composition and animation information), may be carried by separate elementary streams in the same transport stream, such as the video, audio, text, and private user data elementary streams.
At step s6, the encoded Multimedia Container is sent from the TS encoder 32, via the transmitter 34, to the set-top box 20.
At step s8, the set-top box 20 decodes the received Multimedia Container.
At step s10, the content (from the decoded Multimedia Container), i.e., the TV feed, is displayed to the user 24 on the TV 22.
At step s12, as the user 24 watches the TV program 2, the user 24 types (or inputs in some other way) a comment relating to the TV program 2 (e.g., to events occurring in the TV program 2) on to the tablet computer 30 (using the display 82).
At step s14, the user's comment relating to the TV program 2 is transmitted from the tablet computer 30 (e.g., via the processor 80) to the comment service module 44 of the multimedia server 19.
The comment may be sent from the tablet computer 30 as the user 24 is inputting the comment, i.e., a partial comment may be transmitted. This may be done such that the steps s16 through s38 described below are performed whilst the user 24 is still composing the comment. Alternatively the comment may be transmitted from the tablet computer 30 after the user 24 has finished composing it.
Furthermore, at step s14, a first time-stamp relating to the comment is transmitted from the tablet computer 30 to the comment service module 44 of the multimedia server 19. This first time-stamp is indicative of a point in time in the TV program 2 (i.e., a time between the start time 4, t=0, and the end time 6, t=T, of the TV program 2) corresponding to which the user 24 makes the comment. For example, the first time-stamp may indicate the time in the TV program 2 that a certain event occurs, the certain event being an event with respect to which the user 24 makes the comment.
Furthermore, at step s14, a second time-stamp relating to the comment is transmitted from the tablet computer 30 to the comment service module 44 of the multimedia server 19. This second time-stamp is indicative of a date and time of day at which the user 24 makes the comment.
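By way of a non-limiting illustration, a comment together with its first and second time-stamps might be represented as a simple record before transmission from the tablet computer 30 to the comment service module 44. The field names and the JSON encoding below are hypothetical choices made for this sketch, not a format required by the arrangement described:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class Comment:
    """Illustrative comment record; field names are hypothetical."""
    user_id: str
    text: str
    media_time_s: float    # first time-stamp: offset into the TV program (0 <= t <= T)
    wall_clock_iso: str    # second time-stamp: date and time of day the comment was made

def serialize(comment: Comment) -> str:
    # Encode the comment for transmission, e.g., to the comment service module.
    return json.dumps(asdict(comment))

payload = serialize(Comment("user24", "What a goal!", 1325.0, "2012-04-27T20:22:05"))
```

Keeping the two time-stamps as separate fields lets the media-relative time and the absolute time be stored and displayed independently, as in the steps that follow.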
At step s16, the user's comment and the associated first and second time-stamps are stored by the comment service module 44 at the further database 48.
At step s18, the user's comment is sent from the comment service module 44 to the comment analysis module 46.
At step s20, the comment analysis module 46 analyzes the received user's comment. Any appropriate process for analyzing the user's comment may be implemented. For example, conventional keyword detection processes and parsing processes may be used to analyze a text form of the user's comment.
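One minimal sketch of such keyword detection is given below. The stop-word list and the tokenization rule are deliberate simplifications chosen for illustration; a practical comment analysis module would use a far richer lexicon or a full parser:

```python
import re

# Hypothetical stop-word list; a real keyword detector would use a much fuller one.
STOP_WORDS = {"a", "an", "the", "what", "that", "is", "was", "in", "on", "of", "and"}

def extract_keywords(comment: str) -> list[str]:
    """Naive keyword detection: lower-case, tokenize, drop stop words and duplicates."""
    tokens = re.findall(r"[a-z0-9']+", comment.lower())
    seen, keywords = set(), []
    for tok in tokens:
        if tok not in STOP_WORDS and tok not in seen:
            seen.add(tok)
            keywords.append(tok)
    return keywords

keywords = extract_keywords("What a goal that was in the second half!")
```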
At step s22, the results of the analysis of the user's comment performed by the comment analysis module 46 (e.g., the extracted key-words or the parsed comment) are sent from the comment analysis module 46 to the content recommendation module 42.
After step s22, the method proceeds to step s32, which is described below after the description of steps s24 through s30.
At step s24, the EPG service module 36 sends metadata corresponding to the TV program 2 (e.g., the start time 4 and end time 6 of the TV program 2, the type of the TV program 2, genre of the TV program 2, cast and crew names of the TV program 2, the parental advisory rating of the TV program 2, etc.) to the media analysis module 40.
At step s26, the closed-caption service module 38 sends closed caption information for the TV program 2, in the appropriate languages, to the media analysis module 40.
At step s28, using the metadata received from the EPG service module 36, the closed caption information received from the closed-caption service module 38, and the TV feed received from the service operator 16, the media analysis module 40 analyzes the television feed to detect content events, hereinafter referred to as "events," within the TV program 2. The terminology "event" is used herein to refer to a point or period of interest in a multimedia presentation (e.g., the TV program 2). The point or period of interest in the multimedia presentation is meaningful in the context of the multimedia presentation. Examples of events include an action sequence in a movie, a home-run in a televised baseball game, a goal in a televised soccer match, and a commercial break in a TV program. The analysis performed by the media analysis module 40 is any appropriate type of analysis such that events occurring within the TV program 2 may be detected. For example, a conventional process of analyzing audio, video, or closed caption (sub-title) text to detect content events may be used. For example, a period of increased sound levels in a televised soccer match tends to be indicative of an event (e.g., a goal). Also, for example, an instance of a completely black screen in a movie tends to be indicative of an event (e.g., a cut from one scene to the next).
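The two heuristics just mentioned (raised sound levels and black frames) can be sketched as follows. Real media analysis is far more involved; the per-second feature tracks, the threshold value, and the event labels below are all illustrative assumptions:

```python
def detect_events(loudness, black_frames, loud_threshold=0.8):
    """Very simplified event detector over per-second feature tracks.

    loudness:      list of normalized audio levels (0.0-1.0), one value per second
    black_frames:  list of booleans, True where the frame is completely black
    Returns a list of (time_s, event_type) tuples, ordered by time.
    """
    events = []
    for t, level in enumerate(loudness):
        if level >= loud_threshold:
            events.append((t, "crowd-noise"))   # e.g., a goal in a soccer match
    for t, is_black in enumerate(black_frames):
        if is_black:
            events.append((t, "scene-cut"))     # e.g., a cut from one scene to the next
    return sorted(events)

events = detect_events([0.1, 0.9, 0.2], [False, False, True])
```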
At step s30, information relating to one or more of the detected events (e.g., the type of event, closed caption data corresponding to an event, the time the event occurred with respect to the TV program, the duration of the event, etc.) is sent from the media analysis module 40 to the content recommendation module 42.
At step s32, using the comment analysis received from the comment analysis module 46 and the event information received from the media analysis module 40, the content recommendation module 42 identifies one or more subjects, or topics, related to the user's comment or related to the event occurring in the TV program 2 that the user 24 is commenting on. This may be performed, for example, by comparing the analysis of the user's comment with the information relating to that event. For example, keywords extracted from the user's comment may be compared to the closed captions that relate to that event or to a text description of that event.
The identification of the one or more topics may further depend on user behavior (e.g., the user's web-browsing history) or on properties of the user 24 (e.g., a profile of the user 24 or the user's likes, dislikes, hobbies, and interests). The identification of the one or more topics may instead, or also, depend upon properties of the intended recipients of the comment (i.e., the party or parties that the user 24 has indicated should view the comment). The identification of the one or more topics may instead, or also, depend upon previous comments made by the user or by a different party (e.g., made in relation to the multimedia presentation).
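The comparison of comment keywords against event descriptions at step s32 can be sketched as a simple overlap score. The scoring rule below (count of shared words) is a hypothetical stand-in for whatever matching process an implementation actually uses:

```python
def best_matching_event(comment_keywords, event_captions):
    """Rank candidate events by keyword overlap with the comment.

    comment_keywords: list of keywords extracted from the user's comment
    event_captions:   dict mapping an event id to its closed-caption text or description
    Returns the id of the best-matching event.
    """
    comment_set = set(comment_keywords)
    scores = {}
    for event_id, caption in event_captions.items():
        caption_words = set(caption.lower().split())
        scores[event_id] = len(comment_set & caption_words)
    return max(scores, key=scores.get)

match = best_matching_event(
    ["goal", "penalty"],
    {"e1": "corner kick cleared", "e2": "penalty goal scored"},
)
```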
At step s34, the content recommendation module 42 identifies content that relates to the one or more subjects or topics identified at step s32. This may be performed by the content recommendation module 42 searching on the Internet 26 (e.g., using a Web search engine) for information related to those subjects. Also, the content recommendation module 42 may search the database 28 for information (e.g., pictures or documents) related to those subjects. The content recommendation module 42 selects some or all of the identified content items. For example, the content recommendation module 42 may download some or all of the identified content items. Also, the content recommendation module 42 may compile a list of links or references (e.g., hyperlinks) to some or all of the identified content items.
The selection of some or all of the identified content items may depend on one or more other factors, e.g., user behavior (e.g., the user's web-browsing history), properties of the user 24 (e.g., a profile of the user 24 or the user's likes, dislikes, hobbies, and interests), properties of the intended recipients of the comment (i.e., the party or parties that the user 24 has indicated should view the comment), or previous comments made by the user or by a different party (e.g., made in relation to the multimedia presentation).
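As one hypothetical illustration of how such factors might influence the selection, candidate content items could be ranked against interests drawn from the user's profile. The tag-based scoring below is an assumption made for the sketch, not a prescribed ranking method:

```python
def select_content(items, user_interests, max_items=3):
    """Order candidate content items, preferring those tagged with the user's interests.

    items:          list of dicts, each with a "tags" list (illustrative structure)
    user_interests: interests from, e.g., a profile of the user
    Returns up to max_items items, best matches first.
    """
    def score(item):
        return len(set(item["tags"]) & set(user_interests))
    # sorted() is stable, so equally scored items keep their original order.
    return sorted(items, key=score, reverse=True)[:max_items]

candidates = [
    {"id": "news-article", "tags": ["politics"]},
    {"id": "match-highlight", "tags": ["soccer", "goal"]},
    {"id": "player-bio", "tags": ["soccer"]},
]
ranked = select_content(candidates, user_interests=["soccer", "goal"])
```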
The set of all the downloaded or selected content items and the list of references is hereinafter referred to as the “recommended content.” The terminology “recommended content” refers to content that is identified and selected by the content recommendation module 42 and will be recommended to the user 24 as being relevant to, e.g., the comment or the event in relation to which the comment is made. The recommended content may include, but is not limited to, links to websites, news articles, and downloaded documents or pictures (e.g., maps downloaded from a web-server via the Internet 26 or personal pictures retrieved from the database 28).
The recommended content may also comprise, for one or more of the recommended content items (i.e., the downloaded content items and the recommended links) a recommended time in the TV program 2. A recommended time for a content item may be a time in the TV program that an event occurs to which the content item is related. A recommended time for a content item may be determined by the content recommendation module 42 (or by a different module) by, for example, analyzing the comment analysis (i.e., the output of the comment analysis module 46) and the event information (i.e., the output from the media analysis module 40). Also for example, a recommended time for a content item may be determined by analyzing comments of other users (e.g., that may be stored in the further database 48).
At step s36, the recommended content is sent from the content recommendation module 42 to the tablet computer 30 (via the comment service module 44). In other embodiments, the recommended content is sent from the content recommendation module 42 to the TV 22 for display to the user 24.
At step s38, the recommended content is displayed to the user 24 on the tablet computer 30 (on the display 82). In other embodiments, the recommended content is displayed on a different device, e.g., the TV 22. The recommended content may be displayed to the user 24 whilst the user 24 is still composing the comment or after the user 24 has initially finished composing the comment.
In other examples, a number of comments made by the user 24 may be displayed in the first UI 50 (e.g., in a number of other comment boxes). Recommended content relating to some or all of these other comments may also be displayed in the first UI 50 (e.g., in a number of other recommendations boxes).
Thus, content items that relate to a comment that the user 24 is in the process of composing may advantageously be displayed to the user 24 (as recommended content) as that comment is being composed. The user 24 may select (in a relatively simple and intuitive way) one or more of the recommended items for inclusion in the comment. The recommended content items advantageously tend to relate to both the comment being composed and to the events occurring in the TV program 2 at that time.
Returning now to
At step s42, the updated comment is sent from the tablet computer 30 to the comment service module 44.
At step s44, the comment stored in the further database 48 at step s16 is updated so that it is the same as the updated comment, i.e., such that it contains any recommended content included by the user 24 at step s40. This may be achieved in any appropriate way, e.g., by overwriting the original comment with the updated comment. In other embodiments, the original comment is not stored at step s16, and only once the comment has been updated to include any recommended content items desired by the user 24 is the comment stored.
Thus, at step s44 the updated comment (including recommended content) and the first and second time-stamps corresponding to that comment are stored at the multimedia server 19 in the further database 48.
Using the method described above with respect to
Next described is an example way of using the information stored in the further database 48 (i.e., the stored comment and time-stamps).
At step s46, a further user records (e.g., using a digital video recording device) the TV program 2. The further user is a subscriber to the service that allows him to access comments stored on the further database 48 (made, e.g., by the user 24) and to upload his own comments to the further database 48. The TV program 2 is to be watched by the further user at a time that is after the TV program 2 has been watched and commented on by the user 24.
At step s48, sometime after the TV program 2 has been recorded by the further user, the further user watches (e.g., on a further TV) the recording of the TV program 2.
At step s50, whilst watching the recording of the TV program 2, the further user displays (e.g., on a further tablet computer or other companion device, of which he is a user) the information stored in the further database 48. This information may be displayed as described below with reference to
Similar to the first UI 50 described above with reference to
The second UI 60 further comprises a comment box 58 which displays the user's updated comment 64 (i.e., the user's comment including any recommended content he wishes to include). The comment box 58 also displays the first time-stamp 66 (i.e., the time-stamp indicative of the time relative to the TV program 2 at which the user 24 makes the comment). The comment box 58 comprises a marker 68. The position of this marker 68 with respect to the progress bar 54 corresponds to the position of the first time-stamp 66 in the TV program 2. In other words, if the first time-stamp 66 indicates that the updated comment 64 corresponds to a time t=ti in the TV program 2, then the marker 68 of the comment box 58 that contains the updated comment 64 is aligned with the time t=ti of the progress bar 54. Also, the comment box 58 displays the second time-stamp 70 (i.e., the time-stamp indicative of the date and time of day at which the user 24 made the updated comment 64).
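The alignment of a marker 68 with the progress bar 54 reduces to a proportional mapping from the first time-stamp to a position along the bar. A sketch, assuming a horizontal bar of known pixel extent (the parameter names are illustrative):

```python
def marker_x(time_stamp_s, program_length_s, bar_left_px, bar_width_px):
    """Horizontal pixel position of a comment marker along the progress bar.

    A time-stamp t=ti maps to the point a fraction ti/T of the way along the bar,
    where T is the program length.
    """
    fraction = time_stamp_s / program_length_s
    return bar_left_px + round(fraction * bar_width_px)

# A comment made halfway through a one-hour program lands mid-bar.
x = marker_x(1800, 3600, bar_left_px=100, bar_width_px=400)
```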
Thus, as the further user watches the TV program 2, he is able to view the comments the user 24 made on the TV program 2 (including any content the user 24 included in the comments). The further user is also able to view the time with respect to the TV program 2 each comment was made by the user 24. The further user is also able to view the respective absolute time (i.e., date and time of day) at which each comment was made by the user 24. Thus, the user 24 is able to communicate his opinions on the TV program 2 (and any related content items) relatively easily to the further user.
The second UI 60 advantageously provides a relatively simple and intuitive user interface. This interface may be used by a user to view or interact with comments and other information provided by other users and relating to a media presentation. The provided comments and related information may be consumed by a user before, during, or after the consumption of the related media presentation.
The second UI 60 further provides an interface using which the user 24 may edit a comment or a time-stamp associated with a comment. The comment to be edited, or the comment associated with the time-stamp to be edited, may have been made by any user (i.e., by the user performing the editing or by a different user). Methods of performing such editing are described below with reference to
At step s52, the user 24 selects the marker 68 (corresponding to the time-stamp to be edited) of the relevant comment box 58. This may be performed in any appropriate way. For example, if the second UI 60 is displayed on a touch-screen display (e.g., the display 82 of the tablet computer 30), then the user 24 may select the marker 68 by touching it (e.g., with a finger). Also for example, the marker 68 may be selected by the user 24 moving a mouse cursor over the marker and clicking a mouse button.
At step s54, once the marker 68 has been selected, the user 24 repositions the marker 68 (e.g., by using a “drag and drop” technique or by sliding the marker 68 along the top of its comment box 58) from an initial position (where the marker 68 is aligned with the point t=tj on the progress bar 54) to its new position (where the marker 68 is aligned with the point t=tk on the progress bar 54).
At step s56, the user 24 may de-select the marker 68 (e.g., by performing a further action).
Thus, the first time-stamp 66 of a comment may advantageously be altered by the user 24 in a relatively simple and intuitive manner.
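Updating the time-stamp after such a drag-and-drop is the inverse of the marker-positioning mapping: the dropped pixel position is converted back into a time within the program. A sketch, with clamping so that a marker dragged past either end of the progress bar yields a valid time (parameter names are illustrative):

```python
def time_from_x(x_px, bar_left_px, bar_width_px, program_length_s):
    """Map a dropped marker position back to a first time-stamp value.

    The result is clamped to [0, T] so that dropping the marker beyond either
    end of the progress bar still produces a time within the program.
    """
    fraction = (x_px - bar_left_px) / bar_width_px
    return max(0.0, min(1.0, fraction)) * program_length_s

# Dropping the marker mid-bar on a one-hour program yields t = 1800 s.
new_time = time_from_x(300, bar_left_px=100, bar_width_px=400, program_length_s=3600)
```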
At step s58, the user 24 selects the marker 68 corresponding to the comment box 58 that he wishes to associate with the additional time-stamp. In other words, the user 24 selects the marker 68 corresponding to t=ta. This may be performed in any appropriate way, for example, as described above with reference to step s52 of
At step s60, once the marker 68 has been selected, the user 24 creates a duplicate of the marker 68. This may be performed in any appropriate way. For example, the marker 68 may be copied using a conventional “copy and paste” technique or by performing some other action (e.g., tapping twice on a touch-screen display).
At step s62, the user 24 moves one copy of the marker 68 to a new desired location. This may be performed using any appropriate process. For example, one or both copies of the marker 68 may be moved to desired positions by performing steps s52 through s56, as described above with reference to
Thus, a single comment may be associated with a plurality of time-steps within the TV program 2 (e.g., by repeatedly performing the process of
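Associating one comment with a plurality of time-steps, as in steps s58 through s62, can be modeled as a comment holding a list of timestamps to which marker copies are appended and then moved. A hedged sketch (the class and method names are hypothetical, not drawn from the disclosure):

```python
class MultiTimestampComment:
    """Illustrative comment that may be associated with several time-steps."""
    def __init__(self, text: str, timestamp: float):
        self.text = text
        self.timestamps = [timestamp]

    def duplicate_marker(self) -> int:
        """Step s60: create a copy of the selected marker; the copy starts
        at the same time-step. Returns the index of the copy."""
        self.timestamps.append(self.timestamps[-1])
        return len(self.timestamps) - 1

    def move_marker(self, index: int, new_time: float) -> None:
        """Step s62: move one copy of the marker to a new desired time-step."""
        self.timestamps[index] = new_time
```

Repeating `duplicate_marker` and `move_marker` corresponds to repeatedly performing the process, associating the single comment with any number of time-steps.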
The second UI 60 further provides an interface using which the user 24 may view comments (made by himself or by other users) made in relation to part of the TV program 2. The part of the TV program may be specified by the user 24. An example method of performing such a process is described below with reference to
At step s64, the user 24 selects a start (i.e., earliest) time-step for the part of the TV program 2 for which he wishes to view comments. In this example, the user 24 selects the time-step t=tc. This may be performed in any appropriate way. For example, if the second UI 60 is displayed on a touch-screen display (e.g., of the tablet computer 30), then the user 24 may select the start-time by touching the progress bar 54 at the point corresponding to the desired start-time. The user 24 may change this selected time-step by, for example, sliding his finger to a new position on the progress bar 54 corresponding to the desired new start-time. Also for example, the start-time may be selected by the user 24 moving a mouse cursor over the point on the progress bar 54 corresponding to the desired start-time and clicking a mouse button.
At step s66, the user 24 selects an end (i.e., latest) time-step for the part of the TV program 2 for which he wishes to view comments. In this example, the user 24 selects the time-step t=td. This may be performed in any appropriate way. For example, if the second UI 60 is displayed on a touch-screen display (e.g., of the tablet computer 30), then the user 24 may select the end-time by touching the progress bar 54 at the point corresponding to the desired end-time. The user 24 may change this selected time-step by, for example, sliding his finger to a new position on the progress bar 54 corresponding to the desired new end-time. Also for example, the end-time may be selected by the user 24 moving a mouse cursor over the point on the progress bar 54 corresponding to the desired end-time and clicking a mouse button.
Steps s64 and s66 may advantageously be performed simultaneously in a relatively simple and intuitive manner. For example, if the second UI 60 is displayed on a touch-screen display (e.g., of the tablet computer 30), then the user 24 may select the start-time by touching the progress bar 54 at the corresponding point with one digit (e.g., the user's thumb), and simultaneously (or before or afterwards) the user 24 may select the end-time by touching the progress bar 54 at the corresponding point with another digit on the same hand (e.g., the user's forefinger). The user 24 may advantageously change the selected start or end time by, for example, sliding either or both of his digits along the progress bar 54 so that they contact new positions on the progress bar 54 corresponding to the desired new start and end times. The user 24 may also change the length of the selected part by moving his digits (with which he has chosen the start and end times) closer together or further apart. Thus, the user 24 may use a relatively simple "pinching" action (on the progress bar 54) to specify, i.e., select (and, if desired by the user 24, alter) the start and end times of a part of the TV program 2.
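The pinch selection described above reduces to converting two touch positions on the progress bar 54 into an ordered (start-time, end-time) pair. A minimal sketch, assuming a linear position-to-time mapping and illustrative names:

```python
def pinch_to_interval(touch_xs, bar_width: float, duration: float):
    """Convert the touch positions of a pinch on the progress bar into a
    (start-time, end-time) pair; the leftmost touch becomes the start-time.
    Positions are clamped to the bounds of the presentation."""
    times = sorted(max(0.0, min(duration, (x / bar_width) * duration))
                   for x in touch_xs)
    return times[0], times[-1]
```

Because the result is recomputed from the current touch positions, sliding either digit (or both) along the bar naturally stretches or shrinks the selected interval.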
Thus, the user 24 effectively specifies a range of time (i.e., tc to td) within the TV program 2.
At step s68, the tablet computer 30 displays all the comment boxes 58 (and their respective contents) that have a first timestamp 66 that is within the specified range of time (i.e., between the start-time t=tc and the end-time t=td). A summary of the information contained in those comment boxes 58 may be displayed instead of or in addition to the comment boxes 58 themselves. The comment boxes 58 that are displayed at step s68 may be selected, e.g., from the set of all comment boxes 58, in any appropriate way, e.g., by the processor 80 of the tablet computer 30.
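The selection performed at step s68 is a straightforward filter over the set of comment boxes by first timestamp. An illustrative sketch (comment boxes are modeled here as plain dictionaries, an assumption made for brevity):

```python
def comments_in_range(comments, start: float, end: float):
    """Step s68: keep only the comment boxes whose first timestamp falls
    within the specified range of time [start, end]."""
    return [c for c in comments if start <= c["timestamp"] <= end]
```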
Thus, a relatively simple process by which the user 24 can display comments (or a summary of the comments) relating to a specified part 74 of the TV program 2 is provided. This advantageously provides a method that may be used to generate a summary of the part 74 of the TV program 2. For example, the user 24 may just read the comments (or the summary of the comments) relating to the specified portion 74 and gain an understanding of the events that occur in that portion 74, without the user 24 having to watch (or consume) it.
Additional criteria that specify which comments or comment boxes 58 (e.g., that relate to the portion 74) should be displayed to the user 24, or how a summary of the comments that relate to the portion 74 is generated, may also be specified (e.g., by the user 24). For example, the user 24 may specify that he only wishes to have displayed to him, or summarized to him, comments or comment boxes 58 that relate to the portion 74 and that have been made by members of a certain group of other users. Also, for example, the user 24 may specify that he only wishes to have displayed to him, or summarized to him, comments that have been “liked” by other users.
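The additional criteria described here, restricting by author group or by "liked" status, can be composed into a single predicate applied before display or summarization. A hedged sketch with hypothetical field names (`author`, `likes`):

```python
def build_filter(group=None, liked_only: bool = False):
    """Compose the optional display criteria into one predicate:
    restrict to comments made by members of `group` (if given) and/or
    to comments that have been 'liked' by other users."""
    def accept(comment: dict) -> bool:
        if group is not None and comment.get("author") not in group:
            return False
        if liked_only and comment.get("likes", 0) == 0:
            return False
        return True
    return accept
```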
Thus, a user may use a pinching or squeezing action on a timeline to define a time interval. The system then summarizes or filters posts and comments corresponding to that interval. For example, the system may display a summary of all “liked” posts and comments that relate to a point in that time interval. Also for example, a selection of video clips associated with the posts and comments may be generated and compiled into a video summary of the interval.
The selected interval may be stretched or shrunk (e.g., using a pinching motion), thereby changing the content that is displayed to the user 24. In this way, video editing may be performed, i.e., a video summary of the selected interval may be generated and changed by changing the interval. This video summarization of the selected interval may be a video summary of comments relating to the interval, i.e., the video summarization may be driven by secondary content.
The information provided to the user 24 (i.e., the comments, summary of the comments, or summary of the video content) changes as the user 24 changes the selected interval of the media presentation. This information may be presented to the user 24 in any appropriate way. For example, comments may be ordered according to a rating given to those comments by other users. Also for example, a summary of posts and comments may be presented to the user 24, e.g., as a text summary of postings and comments. Also for example, the harvesting or compiling of video clips from the media presentation to create a video summary of the interval may use user posts and comments to determine what visual material to include in the video summary.
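One way to harvest clips for such a video summary, sketched here as an assumption rather than the disclosed implementation, is to take a short window around each commented time-step in the interval and merge overlapping windows into candidate clips:

```python
def clip_ranges(timestamps, pad: float = 5.0):
    """Take a window of `pad` seconds around each commented time-step and
    merge overlapping windows into (start, end) clip ranges suitable for
    compiling a video summary of the interval."""
    windows = sorted((max(0.0, t - pad), t + pad) for t in timestamps)
    merged = []
    for start, end in windows:
        if merged and start <= merged[-1][1]:
            # Overlaps the previous clip: extend it instead of adding a new one.
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged
```

Recomputing these ranges whenever the selected interval changes yields the behavior described above, where stretching or shrinking the interval changes the resulting video summary.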
In the above embodiments, the described methods are implemented in the example network 14 described above with reference to
In view of the many possible embodiments to which the principles of the present invention may be applied, it should be recognized that the embodiments described herein with respect to the drawing figures are meant to be illustrative only and should not be taken as limiting the scope of the invention. Therefore, the invention as described herein contemplates all such embodiments as may come within the scope of the following claims and equivalents thereof.
Number | Name | Date | Kind |
---|---|---|---|
5874965 | Takai et al. | Feb 1999 | A |
6404978 | Abe | Jun 2002 | B1 |
6486896 | Ubillos | Nov 2002 | B1 |
6711293 | Lowe | Mar 2004 | B1 |
7050603 | Rhoads et al. | May 2006 | B2 |
7058891 | O'Neal et al. | Jun 2006 | B2 |
7477268 | Veolia | Jan 2009 | B2 |
7836044 | Kamvar et al. | Nov 2010 | B2 |
7895625 | Bryan | Feb 2011 | B1 |
7934233 | Zimmerman et al. | Apr 2011 | B2 |
7944445 | Schorr et al. | May 2011 | B1 |
7956847 | Christie | Jun 2011 | B2 |
8595375 | Kuznetsov | Nov 2013 | B1 |
9384512 | McClements, IV | Jul 2016 | B2 |
20030074671 | Murakami | Apr 2003 | A1 |
20030182663 | Gudorf et al. | Sep 2003 | A1 |
20060150218 | Lazar | Jul 2006 | A1 |
20060282776 | Farmer et al. | Dec 2006 | A1 |
20070038458 | Park | Feb 2007 | A1 |
20070112837 | Houh | May 2007 | A1 |
20070294374 | Tamori | Dec 2007 | A1 |
20080046925 | Lee et al. | Feb 2008 | A1 |
20080118219 | Chang | May 2008 | A1 |
20080180394 | Yun et al. | Jul 2008 | A1 |
20080266449 | Rathod | Oct 2008 | A1 |
20080294694 | Maghfourian et al. | Nov 2008 | A1 |
20090055742 | Nordhagen | Feb 2009 | A1 |
20090070699 | Birkill et al. | Mar 2009 | A1 |
20090083781 | Yang et al. | Mar 2009 | A1 |
20090116817 | Kim et al. | May 2009 | A1 |
20090164301 | O'Sullivan et al. | Jun 2009 | A1 |
20090164904 | Horowitz | Jun 2009 | A1 |
20090164933 | Pederson et al. | Jun 2009 | A1 |
20090193032 | Pyper | Jul 2009 | A1 |
20090199251 | Badoiu | Aug 2009 | A1 |
20090210779 | Badoiu | Aug 2009 | A1 |
20090217352 | Shen | Aug 2009 | A1 |
20090238460 | Funayama et al. | Sep 2009 | A1 |
20090265737 | Issa et al. | Oct 2009 | A1 |
20090271524 | Davi et al. | Oct 2009 | A1 |
20090297118 | Fink | Dec 2009 | A1 |
20090300475 | Fink | Dec 2009 | A1 |
20100057694 | Kunjithapatham | Mar 2010 | A1 |
20100058253 | Son | Mar 2010 | A1 |
20100122174 | Snibbe et al. | May 2010 | A1 |
20100162303 | Cassanova | Jun 2010 | A1 |
20100185984 | Wright et al. | Jul 2010 | A1 |
20100209077 | Park | Aug 2010 | A1 |
20100218228 | Walter | Aug 2010 | A1 |
20100241968 | Tarara | Sep 2010 | A1 |
20100262909 | Hsieh | Oct 2010 | A1 |
20100262912 | Cha | Oct 2010 | A1 |
20100281108 | Cohen | Nov 2010 | A1 |
20100287473 | Recesso | Nov 2010 | A1 |
20100318520 | Loeb et al. | Dec 2010 | A1 |
20110021251 | Linden | Jan 2011 | A1 |
20110022589 | Bauer | Jan 2011 | A1 |
20110041080 | Fleischman et al. | Feb 2011 | A1 |
20110099490 | Barraclough et al. | Apr 2011 | A1 |
20110113444 | Popovich | May 2011 | A1 |
20110126105 | Isozu | May 2011 | A1 |
20110158605 | Bliss et al. | Jun 2011 | A1 |
20110162002 | Jones | Jun 2011 | A1 |
20110167347 | Joo et al. | Jul 2011 | A1 |
20110181779 | Park | Jul 2011 | A1 |
20110214090 | Yee et al. | Sep 2011 | A1 |
20110238495 | Kang | Sep 2011 | A1 |
20110254800 | Anzures et al. | Oct 2011 | A1 |
20110267422 | Garcia | Nov 2011 | A1 |
20110283188 | Farrenkopf | Nov 2011 | A1 |
20120042246 | Schwesinger | Feb 2012 | A1 |
20120042766 | Spata | Feb 2012 | A1 |
20120047437 | Chan | Feb 2012 | A1 |
20120047529 | Schultz | Feb 2012 | A1 |
20120151347 | McClements, IV | Jun 2012 | A1 |
20120167145 | Incorvia | Jun 2012 | A1 |
20120210220 | Pendergast et al. | Aug 2012 | A1 |
20120223898 | Watanabe | Sep 2012 | A1 |
20120227073 | Hosein | Sep 2012 | A1 |
20120296782 | Tsai | Nov 2012 | A1 |
20120320091 | Rajaraman et al. | Dec 2012 | A1 |
20130004138 | Kilar et al. | Jan 2013 | A1 |
20130014155 | Clarke | Jan 2013 | A1 |
20130016955 | Pejaver | Jan 2013 | A1 |
20130019263 | Ferren | Jan 2013 | A1 |
20130097285 | van Zwol et al. | Apr 2013 | A1 |
20130097551 | Hogan | Apr 2013 | A1 |
20130145269 | Latulipe | Jun 2013 | A1 |
20130159858 | Joffray et al. | Jun 2013 | A1 |
20130174195 | Witenstein-Weaver | Jul 2013 | A1 |
20130215279 | Rivas-Micoud et al. | Aug 2013 | A1 |
20130254816 | Kennedy et al. | Sep 2013 | A1 |
20130290488 | Mandalia | Oct 2013 | A1 |
20140059434 | Anzures et al. | Feb 2014 | A1 |
20140188259 | Lu | Jul 2014 | A1 |
20140215337 | Park et al. | Jul 2014 | A1 |
20150033153 | Knysz | Jan 2015 | A1 |
20150170172 | Gichuhi | Jun 2015 | A1 |
20150378503 | Seo | Dec 2015 | A1 |
20150382047 | Van Os | Dec 2015 | A1 |
Number | Date | Country |
---|---|---|
101345852 | Jan 2009 | CN |
101510995 | Aug 2009 | CN |
101867517 | Oct 2010 | CN |
Entry |
---|
PCT Search Report & Written Opinion, Re: Application #PCT/US2013/035702, dated Jun. 20, 2013. |
PCT Search Report & Written Opinion, Re: Application #PCT/US2013/035714, dated Jun. 20, 2013. |
Barnes C. et al.: “Video Tapestries with Continuous Temporal Zoom”, In ACM Transactions on Graphics (Proc. SIGGRAPH). 29(3), Aug. 2010, all pages. |
Pongnumkul S. et al.: “Content-Aware Dynamic Timeline for Video Browsing”, ACM Symposium on User Interface Software and Technology (UIST), Feb. 1, 2010, all pages. |
Anvil - Video annotation research tool (schema for annotations and content separate from primary content); URL: http://www.anvil-software.de/, accessed on Feb. 6, 2019. |
Number | Date | Country | |
---|---|---|---|
20130290488 A1 | Oct 2013 | US |