While consuming content, a user may wish to access additional related content. This may be motivated by a desire to learn more about the subject of the content or about something mentioned therein, for example.
In the context of written articles, this can be addressed using hypertext. An article on the internet may contain hyperlinks that represent avenues for the access of additional content. Clicking on a word or phrase may lead to a definition of the word, or to another article about the subject, for example. Another web page may be used to present this additional information.
When viewing a video, however, the mechanisms available to the user for the access of related content are generally more limited. In the context of a web page containing a video, hyperlinks may be present elsewhere on the page, such that the user may click on these to access related content. But in other cases, such as when a user is viewing video that has been streamed or downloaded, the user currently has no convenient way to get supplemental content, such as text commentary, or related video or audio.
In the drawings, the leftmost digit(s) of a reference number identifies the drawing in which the reference number first appears.
An embodiment is now described with reference to the figures, where like reference numbers indicate identical or functionally similar elements. While specific configurations and arrangements are discussed, it should be understood that this is done for illustrative purposes only. A person skilled in the relevant art will recognize that other configurations and arrangements can be used without departing from the spirit and scope of the description. It will be apparent to a person skilled in the relevant art that this can also be employed in a variety of systems and applications other than what is described herein.
Disclosed herein are methods and systems to provide supplemental content to a user who is viewing video or other content. In an embodiment, a user may wish to watch a video. The user's device (through which he will access the video) then provides an identifier of that video to a server or other computing facility. At this facility, the video identifier is used to identify supplemental content that corresponds to the user's video. The supplemental content is then provided to the user device for the user's consumption. In an alternative embodiment, the supplemental content may be multiplexed with the video at the computing facility, such that the video and the supplemental content are provided together to the user device. The supplemental content may be structured in such a way that pieces of the supplemental content are accessible at particular points in the video. The piece(s) of the supplemental content available at a particular point in the video will be related to one or more objects that are present at this point. This allows a user to access one or more pieces of supplemental content in a context-specific manner, at a point in the video where the piece(s) of supplemental content are relevant.
An embodiment of the system described herein is illustrated in FIG. 1.
The user device sends an identifier of the video to the player layer services 120. The video identifier may take the form of a signature unique to the video (as shown here), but more generally may be any unambiguous identifier of the video. Another example of such an identifier would be a title identification (ID). In the illustrated embodiment, the video identifier may take the form of an argument in a request 150 (GetLayers) seeking layer information. In response, the layer information specific to the video is provided to the user device 110 in message 160.
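By way of illustration only, this exchange might look like the following Python sketch, in which the signature unique to the video is realized as a SHA-256 digest of the video file and the GetLayers request 150 as an HTTP call. The endpoint path, the JSON field names, and the choice of digest are assumptions made for the sketch, not details taken from the disclosure.

```python
import hashlib
import json
from urllib.request import Request, urlopen

def video_signature(path: str) -> str:
    # One plausible realization of a "signature unique to the video":
    # a SHA-256 digest of the file contents.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def get_layers(service_url: str, video_id: str) -> dict:
    # A GetLayers-style request (message 150): the video identifier is
    # passed as the argument, and the response (message 160) carries
    # the layer information. Endpoint and JSON shape are hypothetical.
    req = Request(
        f"{service_url}/GetLayers",
        data=json.dumps({"video_id": video_id}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urlopen(req) as resp:
        return json.load(resp)
```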
The processing described above is shown in FIG. 2.
In response, at 250 the layer service retrieves the layer information related to the identified video. In embodiments, the layer information may be stored in a database or in some other organized fashion that allows ready access. At 260, the layer service sends the retrieved layer information to the user device. At 270, the user device receives the layer information from the layer service. At 280, the user device makes the layer information available to the user. Examples of interfaces through which the user may access the layer information will be described in greater detail below.
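On the service side, steps 250 and 260 amount to a keyed lookup. A minimal in-memory sketch follows; a production service would use a database or similar organized store as the text notes, and the store and its contents here are stand-ins.

```python
from typing import Optional

# In-memory stand-in for the organized store of layer information
# contemplated at step 250. Keys are video identifiers (e.g.,
# signatures); values are layer-information records.
LAYER_STORE: dict[str, dict] = {}

def handle_get_layers(video_id: str) -> Optional[dict]:
    # Steps 250-260: retrieve the layer information for the identified
    # video and return it to the requesting device (None if the video
    # is unknown to the service).
    return LAYER_STORE.get(video_id)
```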
Note that in the above process, the receipt of the video at the user device and the receipt of the layer information are separate processes. In an alternative embodiment, these two operations may not be separate. For example, the video and the related layer information may be received simultaneously. The video and the layer information may be multiplexed together or otherwise combined, for example, and delivered together. This might be desirable if the content were to be portable. In such a case, the layer information may be multiplexed with the video when the latter is initially delivered. The format of a portable container (e.g., MP4 or MKV) could be extended so that layer information could be kept with the video. The layer information would be held in the container such that the entire container file could be moved from device to device for playback without having to access the layer service.
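For an ISO base media file such as MP4, one way such a container extension could be realized is by appending a vendor-specific top-level box carrying the serialized layer information, since compliant readers skip top-level boxes whose type they do not recognize. The box type "layr" and the JSON payload below are illustrative assumptions; MKV would admit an analogous extension through its own element structure.

```python
import json
import struct

LAYER_BOX_TYPE = b"layr"  # hypothetical vendor-specific box type

def append_layer_box(mp4_path: str, layer_info: dict) -> None:
    # An ISO BMFF box is a 4-byte big-endian size (which includes the
    # 8-byte header) followed by a 4-byte type and the payload.
    # Compliant players ignore unrecognized top-level boxes, so the
    # file remains playable while the layer information travels with
    # it from device to device, as contemplated above.
    payload = json.dumps(layer_info).encode("utf-8")
    box = struct.pack(">I4s", 8 + len(payload), LAYER_BOX_TYPE) + payload
    with open(mp4_path, "ab") as f:
        f.write(box)
```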
The video and layer information may also be combined when the content is streamed. Here, the video and layer information may be delivered from a content delivery network (CDN). Alternatively, they may be delivered separately, such that the video is sent from a CDN and the layer information is sent from a layer service.
The accessing of layer information at the user device is illustrated in FIG. 3.
If a car is shown, for instance, the user may click on the car to get more information about it. Such information would be part of the layer information, and may concern the make and model, the history of such vehicles, or may be an advertisement for the car for example and without limitation. In a film, an actor may be the object of interest, and clicking on the actor may result in learning more about the actor, such as the names of other films featuring him. In a sporting event, clicking on an object may yield information related to the object or the activity surrounding it. In a pre-recorded hockey game for example, clicking on a net during a scoring play may yield information about the play. The layer information may also include supplemental video; in the hockey example, clicking on the net may allow the user access to other video of the goal, taken from a different angle for example. Alternatively, the layer information may be in text or audio form, or may be a hyperlink through which the user may access additional information.
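Resolving such a click into a piece of supplemental content amounts to a hit test against the recorded object positions. The sketch below assumes each layer entry carries a time span, a bounding box, a descriptor, and the associated content; these field names are illustrative, not drawn from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class LayerEntry:
    t_start: float   # seconds into the video at which the object appears
    t_end: float     # seconds at which it disappears
    bbox: tuple      # (x, y, width, height) of the object's region
    descriptor: str  # e.g., "Ferrari" or "Fred Astaire"
    content: str     # text, a hyperlink, or a reference to video/audio

def hit_test(entries, t: float, x: float, y: float):
    # Return the entries whose object is on screen at time t and whose
    # region contains the click point (x, y); each hit's content is a
    # piece of supplemental information the player can then present.
    hits = []
    for e in entries:
        bx, by, bw, bh = e.bbox
        if (e.t_start <= t <= e.t_end
                and bx <= x <= bx + bw and by <= y <= by + bh):
            hits.append(e)
    return hits
```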
The construction of layer information related to a video is illustrated in FIG. 4.
Note that a particular object may be located in different locations in different frames. To address this, one or more object tracking algorithms known to those of ordinary skill in the art may be used. Once an object's coordinates are determined in a particular frame, its coordinates in subsequent or previous frames may be generated using such an algorithm. This would allow for the determination of an object's position across a sequence of frames. These coordinates would also be recorded at 440.
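The disclosure leaves the choice of tracking algorithm open. As a minimal stand-in, the sketch below propagates an object's bounding box between two hand-annotated keyframes by linear interpolation, which is one simple way to generate the per-frame coordinates recorded at 440; the keyframe format is an assumption. A production system might instead seed a visual tracker (such as those in OpenCV's tracking module) with the first annotated box.

```python
def interpolate_track(keyframe_a, keyframe_b):
    # Each keyframe is (frame_index, (x, y, w, h)). Yields an estimated
    # box for every frame between the two annotated frames.
    fa, (xa, ya, wa, ha) = keyframe_a
    fb, (xb, yb, wb, hb) = keyframe_b
    span = fb - fa
    for f in range(fa, fb + 1):
        u = (f - fa) / span if span else 0.0
        yield f, (xa + u * (xb - xa),
                  ya + u * (yb - ya),
                  wa + u * (wb - wa),
                  ha + u * (hb - ha))
```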
At 450, the coordinates of the object's position are entered into the layer information. At 460, supplemental content related to the object at this position is associated with the position. As noted above, this supplemental content may be text, video, or audio, or may be a hyperlink to such information. In an embodiment, the supplemental content may include commentary from the content producer, or may represent additional information from the producer intended as part of the artistic expression. The supplemental content may also originate from previous viewers of the video, and may include textual comments or the number of likes and/or dislikes registered by these viewers.
At 470, this supplemental content and its association with the coordinates (i.e., the mapping between the object's position and the supplemental content) are added to the layer information. In an embodiment, a descriptor of the object may also be entered and mapped to the position. The descriptor may be a text label, for example, such as “Ferrari” or “Fred Astaire.”
The layer information may be organized in any manner known to persons of ordinary skill in the art. For example, items of supplemental content may be indexed by sets of coordinates, where each set of coordinates is associated with an object at these coordinates in the video. This allows for the association of the coordinates of an object with supplemental content, and implements the mapping of supplemental content to coordinates.
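One concrete organization, reusing the LayerEntry shape sketched earlier, is a time-bucketed index in which each bucket lists the entries whose objects are on screen during that interval; the player can then fetch the candidates for the current playback position cheaply and hit-test their coordinates against a click. The one-second bucket granularity below is an assumed detail.

```python
from collections import defaultdict

def build_time_index(entries, bucket_seconds: float = 1.0):
    # Map coarse time bucket -> layer entries active in that bucket.
    # An entry spanning several buckets is listed in each of them.
    index = defaultdict(list)
    for e in entries:
        first = int(e.t_start // bucket_seconds)
        last = int(e.t_end // bucket_seconds)
        for bucket in range(first, last + 1):
            index[bucket].append(e)
    return index
```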
In an embodiment, the user may also contribute to or update the layer information. The user may have commentary, other content, or related links to offer other viewers. A process for such input is illustrated in FIG. 5.
At 540, the position (t, x, y) would be added to the layer information of the video, and at 550 the comment or other user input would be mapped or otherwise associated with the position. At 560, the mapping (i.e., the position, the user input, and the association between the two) would be incorporated into the layer information, thereby adding to or updating the layer information. In an embodiment, the user may also provide a descriptor of the object, such as a word or phrase of text; in such a case, the descriptor would also be added to the layer information.
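Translated into code, steps 540 through 560 reduce to appending one more position-to-content mapping. A sketch follows, reusing the LayerEntry shape from the earlier example and treating the user's comment as the supplemental content; the small spatial and temporal margins around the click are illustrative assumptions so that the same hit test used during playback can later find the annotation.

```python
def add_user_annotation(layer_info: list, t: float, x: float, y: float,
                        comment: str, descriptor: str = "") -> None:
    # Steps 540-560: record the position (t, x, y), associate the
    # user's input with it, and fold the mapping into the layer
    # information. Margins below are arbitrary illustrative choices.
    layer_info.append(LayerEntry(
        t_start=max(0.0, t - 1.0), t_end=t + 1.0,
        bbox=(x - 5.0, y - 5.0, 10.0, 10.0),
        descriptor=descriptor,
        content=comment,
    ))
```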
In an embodiment, the system described herein may provide a number of interfaces for the user. These would allow him to take advantage of the layer information and go directly to a point of interest in the video, for example. One possible interface is illustrated in FIG. 6.
In various embodiments, other user interface designs may be used to give information to the user regarding the content of the video; one such example is shown in FIG. 7.
The embodiment of FIG. 8 illustrates another such interface.
One or more features disclosed herein may be implemented in hardware, software, firmware, and combinations thereof, including discrete and integrated circuit logic, application specific integrated circuit (ASIC) logic, and microcontrollers, and may be implemented as part of a domain-specific integrated circuit package, or a combination of integrated circuit packages. The term software, as used herein, refers to a computer program product including at least one computer readable medium having computer program logic stored therein to cause a computer system to perform one or more features and/or combinations of features disclosed herein. The computer readable medium may be transitory or non-transitory. An example of a transitory computer readable medium may be a digital signal transmitted over a radio frequency or over an electrical conductor, through a local or wide area network, or through a network such as the Internet. An example of a non-transitory computer readable medium may be a compact disk, a flash memory, or other data storage device.
In an embodiment, some or all of the processing described herein may be implemented as software or firmware. Such a software or firmware embodiment of layer service functionality is illustrated in the context of a computing system 900 in FIG. 9.
In the embodiment of FIG. 9, computing system 900 may include one or more processors and a body of memory storing computer program logic that, when executed by the processor(s), provides the layer service functionality described above.
A software or firmware embodiment of functionality at the user device is illustrated in the context of a computing system 1000 in FIG. 10.
In the embodiment of FIG. 10, computing system 1000 may likewise include one or more processors and a body of memory storing computer program logic that, when executed, provides the user device functionality described above.
Methods and systems are disclosed herein with the aid of functional building blocks illustrating the functions, features, and relationships thereof. At least some of the boundaries of these functional building blocks have been arbitrarily defined herein for the convenience of the description. Alternate boundaries may be defined so long as the specified functions and relationships thereof are appropriately performed.
While various embodiments are disclosed herein, it should be understood that they have been presented by way of example only, and not limitation. It will be apparent to persons skilled in the relevant art that various changes in form and detail may be made therein without departing from the spirit and scope of the methods and systems disclosed herein. Thus, the breadth and scope of the claims should not be limited by any of the exemplary embodiments disclosed herein.