The present disclosure is generally related to creating a media content summary based on viewer annotations.
Summaries of media content can enable users to make decisions about whether to examine a particular media content in greater detail. Browsing summaries of media content can also support incidental learning, enabling users to view the most interesting portions of the media content without viewing the media content in its entirety. A media summary may be created by an expert who selects portions of the media content based on his or her expert opinion. For example, a movie trailer expert may select segments of a movie to create a movie trailer. A video-on-demand service may enable users to view movie trailers associated with movies available on-demand to enable the users to determine whether or not to purchase a movie for viewing.
In a particular embodiment, a computer implemented method includes receiving, at a media server, a request from a first user to create a media content summary including select segments of media content. The computer implemented method includes accessing a media content analyzer generated based on viewer annotation data associated with the media content. The viewer annotation data is received from viewers other than the first user. The computer implemented method includes identifying segments of the media content using the media content analyzer. The computer implemented method includes selecting at least one segment of the identified segments of the media content for inclusion in the media content summary. The computer implemented method includes sending the at least one selected segment of the media content to a device associated with the first user.
In another particular embodiment, a tangible computer-readable storage medium includes operational instructions that, when executed by a processor, cause the processor to receive from a user a user request to create a media content summary including select segments of media content. The user request includes user specified criteria. The tangible computer-readable storage medium includes operational instructions that, when executed by the processor, cause the processor to identify segments of the media content using a media content analyzer. The media content analyzer is generated based on viewer annotation data received from viewers other than the user. The tangible computer-readable storage medium includes operational instructions that, when executed by the processor, cause the processor to select at least one segment of the identified segments of the media content for inclusion in the media content summary based on the user specified criteria. The tangible computer-readable storage medium includes operational instructions that, when executed by the processor, cause the processor to send the media content summary for presentation at a display device.
In another particular embodiment, a system includes an input interface to receive a request from a user to generate a media content analyzer to identify segments of media content and to generate a media content summary based on the identified segments of media content. The system includes a generator module to generate the media content analyzer based on viewer annotation data that is received from viewers other than the user. The system includes a selection module to select at least one segment of the identified segments for inclusion in the media content summary based on user specified criteria. The system includes an output interface to send the media content summary including the at least one selected segment to a display device.
In another particular embodiment, a computer implemented method includes receiving, at a media server, a request from a user to generate a media content analyzer. The computer implemented method includes analyzing a first set of viewer annotations associated with media content to determine at least one viewer response metric associated with the media content. The first set of viewer annotations are generated by viewers other than the user. The computer implemented method includes accessing media content analysis information associated with the media content. The computer implemented method includes generating the media content analyzer adapted to approximate the media content analysis information based on the at least one viewer response metric. The computer implemented method includes storing the media content analyzer in a memory.
Referring to
The set-top box device 102 is operable to receive a media content summary 156 from the media server 104 via the network 106 and to display the media content summary 156 at the display device 154. In a particular embodiment, the media content summary 156 is a summary of the media content 112 that includes particular automatically selected segments of the media content 112. A user 140 may use the set-top box device 102 to interact with the media server 104. For example, the user 140 may use a remote control device 152 to send a user request 142 to a media content analyzer 110 of the media server 104 to identify segments 114, 115, 116, and 117 of the media content 112 for inclusion in the media content summary 156. The user 140 may control how the segments 114-117 of the media content summary 156 are displayed at the display device 154 using the remote control device 152.
The media server 104 may include the media content analyzer 110, viewer annotation data 108, and the media content 112. Multiple viewers (not shown) other than the user 140 may view the media content 112 and input individual viewer annotations that are collected to form the viewer annotation data 108, as is described in more detail in the description of
The media content analyzer 110 may be generated based on the viewer annotation data 108 associated with the media content 112, as discussed in more detail below in the description of
In operation, the user 140 may send the user request 142 to the media server 104. The user request 142 may include user specified criteria 144. The user specified criteria 144 may include a time constraint 146 and a content type 148. The content type 148 may direct the media content analyzer 110 to identify segments that include the content type 148. For example, the content type 148 may indicate that the user 140 wishes to view segments of the media content 112 that include players scoring during a sporting event, action scenes of an action program, or humorous scenes of a comedy program. To illustrate, the content type 148 may indicate that the user 140 wishes to view portions of the media content 112 that other viewers annotated as including a humorous type of content. The time constraint 146 may specify a time within which the user 140 wishes to view the identified segments 114-117 of the media content 112. For example, the time constraint 146 may indicate that the portions of the media content 112 annotated as including humorous content have a viewing time of no more than fifteen minutes.
The media content analyzer 110 may receive the user request 142 and may automatically identify the segments 114-117 of the media content 112 for inclusion in the media content summary 156 based on the user specified criteria 144. In a particular embodiment, the media content analyzer 110 may sort the segments 114-117 of the media content 112 based on the user specified criteria 144 before selecting segments of the media content 112 for inclusion in the media content summary 156. For example, when the media content analyzer 110 determines that a total viewing time of the segments 114-117 is greater than the time constraint 146, the media content analyzer 110 may sort the segments 114-117 based on the size of each segment or the number of viewer annotations for each segment 114-117 and select some but not all of the segments 114-117 for inclusion in the media content summary 156 based on the time constraint 146.
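The time-constrained selection described above can be sketched as follows. This is an illustrative sketch only: the `Segment` layout, the greedy strategy, and all names are assumptions for exposition, not part of the disclosure.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    start: float           # seconds from the start of the media content
    end: float             # seconds from the start of the media content
    annotation_count: int  # viewer annotations matching the requested content type

    @property
    def duration(self) -> float:
        return self.end - self.start

def select_segments(segments, time_constraint):
    """Greedily pick the most-annotated segments whose total time fits the constraint."""
    ranked = sorted(segments, key=lambda s: s.annotation_count, reverse=True)
    chosen, total = [], 0.0
    for seg in ranked:
        if total + seg.duration <= time_constraint:
            chosen.append(seg)
            total += seg.duration
    # present the chosen segments in playback order
    return sorted(chosen, key=lambda s: s.start)
```

A segment that would push the summary past the time constraint is simply skipped, so the result is always viewable within the constraint.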
In a particular embodiment, when the user 140 is viewing a particular segment of the media content summary 156 and does not wish to view a remainder of the segment, the user 140 may view the next segment by sending a skip function 150 to the media server using the remote control device 152. The media server 104 may receive the skip function 150 after sending at least a portion of at least one selected segment of the media content 112 to the display device 154. The media server 104 may send a next selected segment to the display device 154 in response to receiving the skip function 150. For example, when the user 140 is viewing the first segment 114 and does not wish to view the remainder of the first segment 114, the user 140 may send the skip function 150 to the media server 104 to request that the media server 104 send the second segment 115 of the media content summary 156 to the set-top box device 102 for display at the display device 154.
By using the media content analyzer 110 to identify the segments 114-117 of the media content 112, the user 140 may view selected segments 114-117 of the media content summary 156, having a particular content type 148, within a time period indicated by the time constraint 146. The user 140 can view the media content summary 156 that includes those segments of the media content 112 that other viewers annotated as having the user specified content type 148 without having to view the entire media content 112, thereby saving time.
Referring to
The set-top box device 202 is operable to receive a media content summary 256 of media content 212 from the media server 204 via the network 206 and to display the media content summary at the display device 254. The set-top box device 202 may enable a user 240 to interact with the media server 204. For example, the user 240 may use a remote control device 252 to send a user request 242 from the set-top box device 202 requesting that a media content analyzer 210 identify segments 214, 215, 216, and 217 of media content 212 for inclusion in the media content summary 256. The user 240 may control how the segments 214-217 are displayed at the display device 254 with the remote control device 252.
The media server 204 may include a memory 260, an input interface 246, an output interface 248, and a processor 250. The memory 260 may include the media content analyzer 210, a generator module 262, a selection module 264, viewer annotation data 208, and the media content 212. The generator module 262 is operable to access the viewer annotation data 208 and to generate the media content analyzer 210 based on the viewer annotation data 208, as is described in more detail below in the description of
In operation, different viewers (not shown) other than the user 240 may view the media content 212 and generate the viewer annotation data 208, as is described below in the description of
In response to the user 240 sending the user request 242 to the media server 204, the generator module 262 may generate the media content analyzer 210 based on the viewer annotation data 208. The media server 204 may identify the segments 214-217 of the media content 212 based on the user specified criteria 244. The media server 204 may select at least one of the segments 214-217 for inclusion in the media content summary 256 based on the user specified criteria 244. The media server 204 may send the media content summary 256 that includes the selected segment(s) 214-217 to the set-top box device 202 for display at the display device 254.
In a particular embodiment, the media content analyzer 210 identifies the segments 214-217 by identifying a starting position and an ending position of each of the segments 214-217. For example, the media content analyzer 210 may identify a first segment 214 by identifying a first starting position 218 and a first ending position 222. The media content analyzer 210 may identify a second segment 215 by identifying a second starting position 219 and a second ending position 223. The media content analyzer 210 may identify a third segment 216 by identifying a third starting position 220 and a third ending position 224. The media content analyzer 210 may identify a fourth segment 217 by identifying a fourth starting position 221 and a fourth ending position 225. In a particular embodiment, the media content analyzer 210 identifies the starting positions 218-221 and the ending positions 222-225 based on a type of each viewer annotation of the viewer annotation data 208 and a number of viewer annotations associated with each of the segments 214-217. For example, when the user specified criteria 244 includes a humorous content type, the media content analyzer 210 may identify the segments 214-217 that have viewer annotations indicating humorous content. The media content analyzer 210 may sort the identified segments 214-217 based on the number of viewer annotations associated with each of the segments 214-217 and select from among the sorted segments 214-217 for inclusion in the media content summary 256 based on the number of viewer annotations. For example, the media content analyzer 210 may select the second segment 215 and the fourth segment 217 for inclusion in the media content summary 256 because the segments 215, 217 have the greatest number of viewer annotations indicating humorous content and can be viewed within the time period 245.
In a particular embodiment, the media content analyzer 210 identifies the starting positions 218-221 and the ending positions 222-225 by detecting scene transitions of the media content 212. For example, the media content analyzer 210 may identify scene transitions by detecting a change in video imagery or audio content. The media content analyzer 210 may identify portions of the viewer annotation data 208 associated with each segment 214-217 and select one or more of the segments 214-217 to send to the set-top box device 202 for presentation at the display device 254 based on the viewer annotation data 208. In a particular embodiment, the media content analyzer 210 identifies the starting positions 218-221 and the ending positions 222-225 to enable the identified segments 214-217 to be viewable within the time period 245.
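The scene-transition detection described above can be sketched in simplified form: a boundary is assumed wherever consecutive frame (or audio) signatures differ by more than a threshold. The scalar per-frame signature and the threshold value are stand-in assumptions; real video and audio analysis would be considerably more involved.

```python
def find_scene_boundaries(frame_signatures, threshold=0.5):
    """Return indices where consecutive signatures differ by more than threshold.

    frame_signatures: a sequence of scalar summaries of each frame
    (e.g., average luminance) -- a hypothetical simplification.
    """
    boundaries = [0]  # the content always starts a segment
    for i in range(1, len(frame_signatures)):
        if abs(frame_signatures[i] - frame_signatures[i - 1]) > threshold:
            boundaries.append(i)
    boundaries.append(len(frame_signatures))  # close the final segment
    return boundaries
```

Consecutive boundary pairs then give candidate starting and ending positions for segments, to which viewer annotation data can be attributed.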
By using the media content analyzer 210 to select from among the segments 214-217 of the media content 212 for inclusion in the media content summary 256, the user 240 can view the media content summary 256 within a specified time period 245. The segments 214-217 are identified and selected for inclusion in the media content summary 256 by the media content analyzer 210 based on the viewer annotation data 208 and the user specified criteria 244. In this way, the user 240 can view those segments of the media content 212 that other viewers annotated and that meet the user specified criteria 244.
Referring to
In a particular embodiment, the first set-top box device 302 is operable to receive a media content summary 356 from the media server 304 and to display the media content summary 356 at the display device 350. A remote control device 352 associated with the first set-top box device 302 may be operable to enable a first viewer 340 to send a user request 342 to the media server 304 to receive the media content summary 356 of the media content 312 for display at the display device 350.
In a particular embodiment, the database 326 is operable to store viewer annotation data, such as a first set of viewer annotations 328, a second set of viewer annotations 330, and timestamps 332 associated with the viewer annotations. In a particular embodiment, annotations 380, 382, and 384 are stored as the first set of viewer annotations 328 along with the timestamps 332. The timestamps 332 may be associated with and identify when the first set of viewer annotations 328 were created with reference to the media content 312. The first set of viewer annotations 328 may be generated by viewers having at least one characteristic common to the first viewer 340 that sends the user request 342. For example, the first viewer 340 may share an interest in a particular type of sport or enjoy similar media content genres with viewers that generated the viewer annotations 328. The second set of viewer annotations 330 may be received from viewers with a particular expertise associated with a genre of the media content 312. For example, the second set of viewer annotations 330 may include annotations from movie reviewers, movie critics, other experts, or any combination thereof.
The media server 304 may include a memory 320. The memory 320 may include media content analysis information 308, a media content analyzer 310, and media content 312. In a particular embodiment, the media server 304 is operable to generate the media content analyzer 310 in response to receiving the user request 342 to generate the media content analyzer 310. The first set of viewer annotations 328 and the second set of viewer annotations 330 may be associated with the media content 312. The media content analysis information 308 may include one or more metrics, such as a first response metric 322, a second response metric 323, a third response metric 324, and a fourth response metric 325. The response metrics 322-325 may be numerical representations of the first set of viewer annotations 328. For example, the response metrics 322-325 may identify that a portion of the media content 312 has twenty-two viewer annotations indicating a humorous type of content and twelve viewer annotations indicating an action type of content. At least one of the response metrics 322-325 may be determined based on analyzing the first set of viewer annotations 328. The media content analysis information 308 may be generated based on the second set of viewer annotations 330. The media content analyzer 310 may be adapted to approximate the media content analysis information 308 based on at least one of the response metrics 322-325. For example, the media content analyzer 310 may apply numerical weights to the response metrics 322-325 to approximate the media content analysis information 308.
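The weighted approximation described above can be sketched as a linear model: the analyzer scores a portion of the media content as a weighted sum of its response metrics. The linear form and all values below are illustrative assumptions; the disclosure states only that numerical weights are applied to the response metrics.

```python
def approximate_analysis(response_metrics, weights):
    """Approximate an expert-derived analysis value as a weighted sum.

    response_metrics: per-content-type annotation counts for one portion,
    e.g. [22, 12] for (humorous, action) -- hypothetical layout.
    weights: one numerical weight per metric, learned or hand-tuned.
    """
    return sum(metric * weight for metric, weight in zip(response_metrics, weights))
```

With suitable weights, portions that viewers annotated heavily in the relevant content types receive scores close to those an expert analysis would assign.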
A second viewer 370 may be associated with the second set-top box device 360, a third viewer 372 may be associated with the third set-top box device 362, and a fourth viewer 374 may be associated with the fourth set-top box device 364. The viewers 370, 372, and 374 may generate the annotations 380, 382, and 384 while viewing the media content 312 during different time periods or concurrently. The annotations 380, 382, and 384 may include a facial expression, a hand gesture, a text input, an emoticon selection, an avatar selection, another type of viewer annotation, or any combination thereof. In a particular embodiment, the annotations 380, 382, and 384 are stored as metadata 318 associated with the media content 312. In another particular embodiment, the annotations 380, 382, and 384 are stored as the first set of viewer annotations 328 at the database 326. The timestamps 332 may be stored at the database 326 with the first set of viewer annotations 328. Each timestamp of the timestamps 332 may identify when the associated viewer annotation was created with reference to the media content 312.
In operation, the first viewer 340 may send a user request 342 including user specified criteria 344 to the media server 304 via the network 306. For example, the first viewer 340 may use the remote control device 352 to instruct the first set-top box device 302 to send the user request 342 to the media server 304. In response to receiving the user request 342, the media content analyzer 310 may be generated by analyzing the first set of viewer annotations 328 associated with the media content 312 to determine at least one viewer response metric associated with the media content, such as the response metric 322-325. For example, the media content analyzer 310 may be generated by a generator module, such as the generator module 262 of
The media content analyzer 310 may be stored at the memory 320 of the media server 304. The media content analyzer 310 may identify segments 314, 315, 316, and 317 of the media content 312 and generate genre specific response values associated with each of the segments 314-317. The segments 314-317 may be portions of the media content 312 that have associated viewer annotations. The segments 314-317 may be identified by the media content analyzer 310 when creating a summary of the media content 312 for display at the display device 350. For example, the media content analyzer 310 may identify the segment 314 as having seven viewer annotations indicating a humorous genre and identify the segment 315 as having twelve viewer annotations indicating an action genre. The media content analyzer 310 may identify the segments 314-317 of the media content 312 by identifying portions of the media content 312 that have associated response metrics 322-325 and may change a size of the portions until the size satisfies the user specified criteria 344. For example, when the user specified criteria indicates that a summary viewable in fifteen minutes is to be generated, the media content analyzer 310 may identify the segments of the media content 312 that have the greatest number of viewer annotations corresponding to the user specified criteria 344, and adjust the starting and ending positions of each of the segments 314-317 until the total viewing time of the segments 314-317 is less than or equal to fifteen minutes. The media content analyzer 310 may generate the response metrics 322-325 associated with each of the segments 314-317 based on the first set of viewer annotations 328 associated with each of the segments 314-317. The media content analyzer 310 may apply a numerical weight to each of the response metrics 322-325 to generate a second numerical value associated with the segments 314-317 and select the identified segments 314-317 based on the second numerical value.
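The boundary-adjustment loop described above can be sketched as follows. The proportional trimming toward each segment's center is an assumption made for illustration; the disclosure states only that starting and ending positions are adjusted until the total viewing time satisfies the user specified criteria.

```python
def fit_to_time_limit(segments, limit):
    """Shrink (start, end) pairs toward their centers until total time <= limit.

    segments: list of (start, end) tuples in seconds -- hypothetical layout.
    """
    total = sum(end - start for start, end in segments)
    if total <= limit:
        return segments  # already viewable within the limit
    scale = limit / total  # uniform shrink factor, an illustrative choice
    fitted = []
    for start, end in segments:
        center = (start + end) / 2
        half = (end - start) * scale / 2
        fitted.append((center - half, center + half))
    return fitted
```

Other strategies (trimming only the longest segments, or snapping to nearby scene transitions) would equally satisfy the stated behavior.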
The first viewer 340 may use the media content analyzer 310 to identify and view the media content summary 356 that includes selected portions of the media content 312. To illustrate, the first viewer 340 may use the media content analyzer 310 to identify and view the media content summary 356 that includes portions of the media content 312 that contain humorous segments of a humorous program or that contain action segments of an action program. The media content analyzer 310 may identify segments of the media content 312 based on the first set of viewer annotations 328 using the response metrics 322-325 to approximate the analysis of the second set of viewer annotations 330.
Referring to
A request from a user to create a media content summary may be received, at 402. A media content analyzer may be accessed, at 404. The media content analyzer may be generated based on viewer annotation data associated with media content. The viewer annotation data may be received from viewers other than the user. Moving to 406, segments of the media content may be identified using the media content analyzer. Proceeding to 408, the identified segments of the media content may be sorted based on user specified criteria. Advancing to 410, at least one segment of the identified segments of the media content may be selected for inclusion in the media content summary. Continuing to 412, the media content summary may be sent to a display device. Proceeding to 414, input may be received indicating a skip function after sending at least a portion of the at least one selected segment to the display device. Continuing to 416, a next selected segment of the media content summary may be sent to the display device in response to the skip function. The method ends at 418.
By using a media content analyzer to identify and select segments of media content for inclusion in a media content summary, a viewer may view segments of the media content that other viewers annotated as matching a viewer specified criteria and may skip to a next segment of interest. For example, a viewer who did not view one or more episodes of a particular television series may generate and view a summary of the missed episodes. The viewer can use the skip function to view only those segments of the media content summary which are of interest to the viewer. For example, when a viewer is interested in determining what happened to a particular character of a program, the viewer can skip segments that are not related to that particular character.
Referring to
In a particular embodiment, viewer annotation data associated with media content is received, at 502. In a particular embodiment, the viewer annotations may be stored as metadata associated with the media content, at 504. In another particular embodiment, the viewer annotations and a timestamp associated with each viewer annotation may be stored at a database, at 506. The timestamp may identify when the associated viewer annotation was created with reference to the media content. Moving to 508, a user request to select segments of media content based on user specified criteria may be received. Proceeding to 510, segments of the media content may be identified using a media content analyzer. Advancing to 512, a size of each of the identified segments of the media content may be determined by identifying a starting position and an ending position of each of the identified segments. Continuing to 514, at least one segment of the identified segments of the media content may be selected for inclusion in a media content summary based on the user specified criteria. Proceeding to 516, the at least one selected segment of the media content may be sent for presentation at a display device. The method then ends at 518.
By using a media content analyzer to identify segments of media content, a viewer may view a media content summary that includes segments of the media content that other viewers annotated as matching a user specified criteria within a user specified time period. For example, a viewer who was unable to view a sporting event may generate and view a summary of the sporting event within a user specified time period.
Referring to
A request to generate a media content analyzer may be received from a viewer, at 602. Advancing to 604, a first set of viewer annotations may be selected based on at least one characteristic common to viewers that generated the first set of viewer annotations and the viewer that sent the request. For example, the first set of viewer annotations may be selected because the viewers that generated the first set of viewer annotations and the viewer that sent the request are in a particular age bracket, live in a particular zip code, share a particular hobby, are members of a particular group, have other demographic similarities, or any combination thereof. The viewer annotations may include a facial expression, a hand gesture, a text input, an emoticon selection, another type of viewer annotation, or any combination thereof.
Moving to 606, the first set of viewer annotations associated with the media content may be analyzed to determine at least one viewer response metric associated with the media content. For example, the first set of viewer annotations may be analyzed to identify which portions of the media content have associated viewer annotations, a type of each viewer annotation (e.g. humor, scoring event, plot turning point, etc.), and a number of each viewer annotation type. To illustrate, the viewer annotations may be analyzed to determine that at five minutes and thirty-two seconds from the start of the media content, twenty out of twenty-five viewers annotated a scene of the media content as having a humorous content type. A viewer response metric may be generated to indicate that eighty percent of viewers annotated the portion of the media content at five minutes and thirty-two seconds as having a humorous content type.
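The metric computation illustrated above (twenty of twenty-five viewers yielding an eighty percent response metric) can be sketched as follows. The flat annotation layout is a hypothetical simplification of the stored viewer annotation data and timestamps.

```python
def response_metric(annotations, position, content_type, total_viewers):
    """Percentage of viewers who annotated `position` with `content_type`.

    annotations: list of dicts like {"timestamp": <seconds>, "type": <str>},
    one per viewer annotation -- an assumed, illustrative layout.
    """
    count = sum(
        1 for a in annotations
        if a["timestamp"] == position and a["type"] == content_type
    )
    return 100.0 * count / total_viewers
```

In practice annotations near a position would likely be grouped into a window rather than matched exactly, but the percentage computation is the same.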
Continuing to 608, media content analysis information may be generated based on a second set of viewer annotations associated with the media content. The second set of viewer annotations may be received from viewers with a particular expertise associated with a genre of the media content. For example, the second set of viewer annotations associated with a movie may be received from movie critics. Proceeding to 610, the media content analysis information may be accessed. Advancing to 612, a media content analyzer adapted to approximate the media content analysis information may be generated based on the at least one viewer response metric. For example, the media content analyzer may perform an analysis to identify numerical weights that may be applied to the first set of viewer annotations to approximate the second set of viewer annotations. To illustrate, numerical weights may be generated for the first set of viewer annotations of each segment to approximate the annotations of the movie critics. Moving to 614, the media content analyzer may be stored in a memory. The method ends at 616.
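The weight-identification step described above can be sketched as a least-squares fit that maps viewer-derived metrics onto expert-derived scores. The one-weight linear model is an assumption chosen for brevity; the disclosure states only that numerical weights approximating the expert annotations are identified.

```python
def fit_weight(viewer_metrics, expert_scores):
    """Least-squares weight w minimizing sum((w * x - y)^2).

    viewer_metrics: per-segment metrics derived from the first set of
    viewer annotations; expert_scores: corresponding values derived from
    the expert (second) set of annotations. Both are assumed layouts.
    """
    num = sum(x * y for x, y in zip(viewer_metrics, expert_scores))
    den = sum(x * x for x in viewer_metrics)
    return num / den
```

A fuller implementation would fit one weight per response metric (a multivariate regression), but the principle, choosing weights so viewer data predicts expert judgments, is the same.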
By analyzing annotation data of experts, such as media critics/reviewers, a media content analyzer capable of approximating the annotation data of the experts may be generated based on viewer annotations. In this way, viewer annotations of media content may be used to identify segments of media content that experts may identify as being segments worthy of inclusion in a summary of the media content.
Referring to
In a particular embodiment, portions of media content having viewer annotations associated with the portions are identified, at 702. Moving to 704, a size of each portion may be changed until the size satisfies the user specified criteria. For example, the size of each portion may be changed to enable the portions to be viewed within a user specified period of time. In another example, the size of each portion may be changed to optimize the portions to match annotations of viewers that are considered experts in the genre. Advancing to 706, segments of the media content may be identified using a media content analyzer. The media content analyzer may be generated based on viewer annotation data received from viewers other than the user. For example, the segments of the media content may be identified based on the portions of the media content. Proceeding to 708, genre-specific response values associated with each of the segments may be generated. For example, when a particular segment contains action content that some viewers find humorous, the particular segment may be identified as having eleven viewer annotations indicating a humorous content type and fourteen viewer annotations indicating an action content type. Advancing to 710, a numerical value associated with each identified segment may be generated based on the first set of viewer annotations. For example, the numerical value may be a total number of annotations, regardless of content type, for each segment. The numerical value may be a total number of annotations associated with a particular type of content for each segment. The numerical value may be expressed as a percentage of viewers that annotated each segment. Moving to 712, a weight may be applied to each numerical value to generate a second numerical value associated with each identified segment.
For example, a numerical weight may be applied to each numerical value to approximate annotations of viewers having an expertise in a genre associated with the media content. To illustrate, numerical values derived from viewer annotations may be weighted to approximate expert annotations. Proceeding to 714, the identified segments may be selected for inclusion in a media content summary based on the second numerical value. The method ends at 716.
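The steps at 708 through 714 above can be sketched together: per-genre annotation counts become a weighted score per segment, and segments are selected by that score. The dictionary layout, the weight values, and the top-N selection rule are all illustrative assumptions.

```python
def score_and_select(segments, weights, top_n=2):
    """Rank segments by weighted genre-specific annotation counts.

    segments: {segment_name: {genre: annotation_count}} -- assumed layout.
    weights: {genre: numerical_weight}, e.g. tuned to approximate experts.
    """
    scores = {
        name: sum(counts.get(genre, 0) * w for genre, w in weights.items())
        for name, counts in segments.items()
    }
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:top_n]  # the segments selected for the summary
```

For instance, a segment with eleven humorous and fourteen action annotations would outrank a lightly annotated segment under most weightings of those two genres.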
By using a media content analyzer to identify segments of media content, a viewer may view a media content summary that includes segments of the media content that other viewers annotated as matching a viewer specified criteria, such as a time period or a genre. For example, when a viewer is unable to view a martial arts movie, the viewer may generate and view a summary of segments of the media content that are of interest to the viewer. To illustrate, the viewer may specify that the viewer only wishes to view segments of the media content that include leaping kicks but do not include throws.
In a networked deployment, the computer system may operate in the capacity of a set-top box device or media server, such as the set-top box device 102 or the media server 104.
In an alternative embodiment, dedicated hardware implementations, such as application specific integrated circuits, programmable logic arrays and other hardware devices, can be constructed to implement one or more of the methods described herein. Applications that may include the apparatus and systems of various embodiments can broadly include a variety of electronic and computer systems. One or more embodiments described herein may implement functions using two or more specific interconnected hardware modules or devices with related control and data signals that can be communicated between and through the modules, or as portions of an application-specific integrated circuit. Accordingly, the present system encompasses software, firmware, and hardware implementations.
In accordance with various embodiments of the present disclosure, the methods described herein may be implemented by software programs executable by a computer system. Further, in an exemplary, non-limited embodiment, implementations can include distributed processing, component/object distributed processing, and parallel processing. Alternatively, virtual computer system processing can be constructed to implement one or more of the methods or functionality as described herein.
The present disclosure contemplates a computer-readable medium that includes instructions 824 or receives and executes instructions 824 responsive to a propagated signal, so that a device connected to a network 826 can communicate voice, video or data over the network 826. Further, the instructions 824 may be transmitted or received over the network 826 via the network interface device 820.
While the computer-readable medium is shown to be a single medium, the term “computer-readable medium” includes a single medium or multiple media, such as a centralized or distributed database, and/or associated caches and servers that store one or more sets of instructions. The term “computer-readable medium” shall also include any medium that is capable of storing or encoding a set of instructions for execution by a processor, where the set of instructions causes a computer system to perform any one or more of the methods or operations disclosed herein.
In a particular non-limiting, exemplary embodiment, the computer-readable medium can include a solid-state memory such as a memory card or other package that houses one or more non-volatile read-only memories. Further, the computer-readable medium can be a random access memory or other volatile re-writable memory. Additionally, the computer-readable medium can include a magneto-optical or optical medium, such as a disk or tape or other storage device to capture carrier wave signals such as a signal communicated over a transmission medium. A digital file attachment to an email or other self-contained information archive or set of archives may be considered equivalent to a tangible storage medium. Accordingly, the disclosure is considered to include any one or more of a computer-readable storage medium and other equivalents and successor media, in which data or instructions may be stored.
It should also be noted that software that implements the disclosed methods may optionally be stored on a tangible storage medium, such as: a magnetic medium, such as a disk or tape; a magneto-optical or optical medium, such as a disk; or a solid state medium, such as a memory card or other package that houses one or more read-only (non-volatile) memories, random access memories, or other re-writable (volatile) memories.
Although the present specification describes components and functions that may be implemented in particular embodiments with reference to particular standards and protocols, the disclosure is not limited to such standards and protocols. For example, standards for the Internet and other packet switched network transmission (e.g., TCP/IP, UDP/IP, HTML, and HTTP) and standards for viewing media content (e.g., MPEG, SMPTE, and H.264) represent examples of the state of the art. Such standards are periodically superseded by faster or more efficient equivalents having essentially the same functions. Accordingly, replacement standards and protocols having the same or similar functions as those disclosed herein are considered equivalents thereof.
The illustrations of the embodiments described herein are intended to provide a general understanding of the structure of the various embodiments. The illustrations are not intended to serve as a complete description of all of the elements and features of apparatus and systems that utilize the structures or methods described herein. Many other embodiments may be apparent to those of skill in the art upon reviewing the disclosure. Other embodiments may be utilized and derived from the disclosure, such that structural and logical substitutions and changes may be made without departing from the scope of the disclosure. Accordingly, the disclosure and the figures are to be regarded as illustrative rather than restrictive.
One or more embodiments of the disclosure may be referred to herein, individually and/or collectively, by the term “invention” merely for convenience and without intending to voluntarily limit the scope of this application to any particular invention or inventive concept. Moreover, although specific embodiments have been illustrated and described herein, it should be appreciated that any subsequent arrangement designed to achieve the same or similar purpose may be substituted for the specific embodiments shown. This disclosure is intended to cover any and all subsequent adaptations or variations of various embodiments. Combinations of the above embodiments, and other embodiments not specifically described herein, will be apparent to those of skill in the art upon reviewing the description.
The Abstract of the Disclosure is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. In addition, in the foregoing Detailed Description, various features may be grouped together or described in a single embodiment for the purpose of streamlining the disclosure. This disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter may be directed to less than all of the features of any of the disclosed embodiments. Thus, the following claims are incorporated into the Detailed Description, with each claim standing on its own as defining separately claimed subject matter.
The above-disclosed subject matter is to be considered illustrative, and not restrictive, and the appended claims are intended to cover all such modifications, enhancements, and other embodiments, which fall within the scope of the present disclosure. Thus, to the maximum extent allowed by law, the scope of the present disclosure is to be determined by the broadest permissible interpretation of the following claims and their equivalents, and shall not be restricted or limited by the foregoing detailed description.
The present application claims priority from and is a continuation of patent application Ser. No. 12/414,786, filed on Mar. 31, 2009 and entitled “SYSTEM AND METHOD TO CREATE A MEDIA CONTENT SUMMARY BASED ON VIEWER ANNOTATIONS,” the contents of which are incorporated herein by reference in their entirety.
Number | Name | Date | Kind |
---|---|---|---|
1773980 | Farnsworth | Aug 1930 | A |
3567914 | Neese et al. | Mar 1971 | A |
3729581 | Anderson | Apr 1973 | A |
4337480 | Bourassin et al. | Jun 1982 | A |
4937784 | Masai et al. | Jun 1990 | A |
5050071 | Harris et al. | Sep 1991 | A |
5528281 | Grady et al. | Jun 1996 | A |
5802361 | Wang et al. | Sep 1998 | A |
5835667 | Wactlar et al. | Nov 1998 | A |
5963670 | Lipson et al. | Oct 1999 | A |
5991365 | Pizano et al. | Nov 1999 | A |
5991373 | Pattison et al. | Nov 1999 | A |
6173317 | Chaddha et al. | Jan 2001 | B1 |
6236395 | Sezan | May 2001 | B1 |
6298482 | Seidman et al. | Oct 2001 | B1 |
6353824 | Boguraev et al. | Mar 2002 | B1 |
6370543 | Hoffert et al. | Apr 2002 | B2 |
6449653 | Klemets | Sep 2002 | B2 |
6496981 | Wistendahl et al. | Dec 2002 | B1 |
6557042 | He | Apr 2003 | B1 |
6560281 | Black et al. | May 2003 | B1 |
6751776 | Gong | Jun 2004 | B1 |
6965646 | Firestone | Nov 2005 | B1 |
7111009 | Gupta | Sep 2006 | B1 |
7191462 | Roman et al. | Mar 2007 | B1 |
7334186 | Wu | Feb 2008 | B2 |
7446803 | Leow | Nov 2008 | B2 |
7631327 | Dempski et al. | Dec 2009 | B2 |
7647555 | Wilcox | Jan 2010 | B1 |
7657060 | Cohen | Feb 2010 | B2 |
7684347 | Merkey | Mar 2010 | B2 |
7720851 | Chang et al. | May 2010 | B2 |
7774817 | Gupta | Aug 2010 | B2 |
7859539 | Beckman | Dec 2010 | B2 |
7860854 | Lipson et al. | Dec 2010 | B2 |
7895625 | Bryan et al. | Feb 2011 | B1 |
7908244 | Royo | Mar 2011 | B2 |
7975215 | Duncan | Jul 2011 | B2 |
8005841 | Walsh | Aug 2011 | B1 |
8065302 | Sridhar | Nov 2011 | B2 |
8065710 | Malik | Nov 2011 | B2 |
8099315 | Amento et al. | Jan 2012 | B2 |
8112702 | Badoiu | Feb 2012 | B2 |
8117210 | Cerosaletti et al. | Feb 2012 | B2 |
8364020 | Lui | Jan 2013 | B2 |
8516374 | Fleischman | Aug 2013 | B2 |
8548918 | Amidon | Oct 2013 | B1 |
9467728 | Ellis | Oct 2016 | B2 |
20010014971 | Nagashima et al. | Aug 2001 | A1 |
20020059584 | Ferman | May 2002 | A1 |
20020069218 | Sull | Jun 2002 | A1 |
20020108112 | Wallace | Aug 2002 | A1 |
20020129371 | Emura et al. | Sep 2002 | A1 |
20020188949 | Wang et al. | Dec 2002 | A1 |
20020199194 | Ali | Dec 2002 | A1 |
20030066068 | Gutta et al. | Apr 2003 | A1 |
20030101104 | Dimitrova et al. | May 2003 | A1 |
20030177503 | Sull | Sep 2003 | A1 |
20030182620 | Errico et al. | Sep 2003 | A1 |
20030221198 | Sloo | Nov 2003 | A1 |
20040068758 | Daily et al. | Apr 2004 | A1 |
20040237032 | Miele | Nov 2004 | A1 |
20040268398 | Fano et al. | Dec 2004 | A1 |
20050004930 | Hatta | Jan 2005 | A1 |
20050071865 | Martins | Mar 2005 | A1 |
20050114908 | Li | May 2005 | A1 |
20050120368 | Goronzy | Jun 2005 | A1 |
20050160457 | Rui | Jul 2005 | A1 |
20060101499 | Aravamudan et al. | May 2006 | A1 |
20060170760 | Anderson, Jr. | Aug 2006 | A1 |
20060212794 | Evans | Sep 2006 | A1 |
20070006262 | Cleron et al. | Jan 2007 | A1 |
20070027839 | Ives | Feb 2007 | A1 |
20070157249 | Cordray et al. | Jul 2007 | A1 |
20070208766 | Malik | Sep 2007 | A1 |
20070250901 | McIntire | Oct 2007 | A1 |
20080028323 | Rosen | Jan 2008 | A1 |
20080059872 | Tseng et al. | Mar 2008 | A1 |
20080126191 | Schiavi | May 2008 | A1 |
20080133736 | Wensley | Jun 2008 | A1 |
20080140385 | Mahajan | Jun 2008 | A1 |
20080313541 | Shafton | Dec 2008 | A1 |
20080313570 | Shamma | Dec 2008 | A1 |
20090006082 | Harris | Jan 2009 | A1 |
20090077658 | King | Mar 2009 | A1 |
20090144608 | Oisel et al. | Jun 2009 | A1 |
20090210779 | Badoiu et al. | Aug 2009 | A1 |
20100005485 | Tian | Jan 2010 | A1 |
20100008643 | Rakib | Jan 2010 | A1 |
20100070527 | Chen | Mar 2010 | A1 |
20100070987 | Amento et al. | Mar 2010 | A1 |
20100169906 | Takahashi | Jul 2010 | A1 |
20100198835 | Sorvari et al. | Aug 2010 | A1 |
20100211432 | Yiu | Aug 2010 | A1 |
Number | Date | Country | |
---|---|---|---|
20140325546 A1 | Oct 2014 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 12414786 | Mar 2009 | US |
Child | 14284991 | US |