The present invention relates generally to digital video distribution and playback systems and in particular to digital video distribution and playback systems providing enhanced playback control.
The digital video revolution is evolving from a physical-media distribution model to electronic-media distribution models that utilize Content Delivery Networks (CDNs) and Consumer Grade Networks (CGNs), such as residential Internet and in-home networks, for delivery of content to devices. The Advanced Video Coding (AVC/H.264) standard is prevalent in today's optical and broadcast industries, but its adoption at bit-rates suitable for CDN/CGN distribution has not yet materialized in a unified and open specification for resolutions including full-HD (1080p) video.
Digital video formats, however, are typically designed to efficiently support playback of content. Other common user functions are typically supported through increased player complexity (and therefore cost), or the performance of those functions is compromised, limiting the quality of the user-experience.
For example, visual-search through digitally encoded multimedia files is typically performed by displaying only the key-frames (aka intra-frames) of the relevant video stream. The key-frames are displayed for a time corresponding to the speed of the visual search being performed by the user and some may be skipped when a high search speed is requested. Alternate methods may decode all or parts of the video stream at higher rates and display selective frames to visually increase the presentation speed. These methods for visual-search may deliver a poor visual experience to the user due to the coarseness and inconsistency of the temporal difference between displayed images. Complicating matters even more is that devices operate differently depending on whether visual-search is performed in the forward or reverse direction. Finally, devices may require the video stream to be read at speeds that are multiple times higher than the standard rate required to playback the video in normal speed, challenging the device's subsystems.
Similarly, other typical functions performed or required to be performed during a playback session with a single or with multiple titles of content can often be limited in their ability to deliver a consistent, high-quality experience to the user.
Generally, the present invention provides a specific set of operating points devised to maximize compatibility across both personal computer (PC) and consumer electronics (CE) platforms, resulting in high-quality video encoded at data rates up to 40% lower than the H.264 Level 4 data rates while still maintaining a good visual quality level.
In particular, the effects of the CDN/CGN compression settings on visual-search are provided along with a method and system that improves the user-experience beyond traditional visual-search on optical-media. The method and system offer smooth visual-search capability in both the forward and reverse directions, operating at speeds from 2× to 200× and beyond, and are implementable on both PCs and CE devices that access content from optical disks or electronic sources across CDNs/CGNs. When combined, these features provide a high-quality user-experience for content targeted at delivery over many types of networks.
Various embodiments provided herein offer consistent visual behavior during visual-search, operating equally well in both forward and reverse search directions while substantially reducing the demands placed on the device in delivering the high-quality experience.
In one embodiment, a method of encoding a media file for playing back is provided. The method comprises extracting a video track from an original media file, where content is encoded in the video track; using the encoded content to encode an application enhancement track, where encoding the application enhancement track includes discarding at least some of the content; and creating a media file including the content from the original media file encoded in a video track and an accompanying application enhancement track.
In another embodiment, a method of decoding a media file for playing back comprises obtaining a media file containing compressed content and an accompanying application enhancement track, which is a subset of the compressed content; playing back the compressed content; and decoding frames of the application enhancement track at a rate proportional to a visual-search speed and from a location determined by the portion of the compressed content most recently played back.
In yet another embodiment, a system for playback of a media file comprises a media server and a client processor. The media server is configured to generate at least one application enhancement track from an original media file, the at least one application enhancement track having at least one frame in common with the original media file and being substantially smaller in file size than the original media file. The client processor is in network communication with the media server and is configured to send requests for the original media file to the media server. The media server is also configured to transmit the requested original media file along with the at least one application enhancement track.
In one other embodiment, a media player comprises a user interface configured to receive user instructions and a playback engine configured to decode a media file containing content and an accompanying application enhancement track, which is a subset of the content. The playback engine is configured to commence playback of the content in response to a user instruction received via the user interface. The playback engine is also configured to select portions of the application enhancement track to decode and playback in response to receipt of a user instruction received via the user interface and the portion of the content most recently played back by the playback engine.
The above-mentioned and other features of this invention and the manner of obtaining and using them will become more apparent, and will be best understood, by reference to the following description, taken in conjunction with the accompanying drawings. The drawings depict only typical embodiments of the invention and do not therefore limit its scope.
Generally, a digital video playback system is provided to ensure smooth playback regardless of the playback speed or direction that allows for the delivery of a high-quality experience to the user while allowing for the reduction in the processing load placed on a device.
Digitally compressed video is typically encoded using algorithms such as those defined by the MPEG committee (e.g., MPEG-2, MPEG-4 Part 2 and MPEG-4 Part 10). These algorithms encode images from the source material into sequences of “key-frames” and “delta-frames”.
Key-frames contain all the data required to display a specific image from the source video. A delta-frame contains the difference data between one or more previously decoded images and the image it encodes. In general, there is a 1:1 mapping of source images and encoded frames. However, the 1:1 mapping does not hold true when the video is encoded at a different frame rate relative to the source video sequence. Thus, to decode frame F of a video sequence, all the frames that form the basis of the difference values contained in F must first be decoded. Applied recursively, this decode method ultimately requires a key-frame to start the decode process, since it is not based on any previously decoded image. Hence, the very first frame generated by typical encoders is a key-frame.
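The decode dependency described above can be sketched as follows. This is a simplified illustration, not part of any particular codec: the frame layout and function name are hypothetical.

```python
# Simplified sketch of the decode dependency: to display frame F, the
# decoder must start from the nearest preceding key-frame and decode
# forward, since each delta-frame depends on previously decoded images.

def decode_to(frames, target):
    """frames: list of 'K' (key-frame) or 'D' (delta-frame) markers.
    Returns the indices that must be decoded to display frames[target]."""
    # Walk backward to the key-frame that starts the dependency chain.
    start = target
    while frames[start] != 'K':
        start -= 1
    # Decode forward from that key-frame up to and including the target.
    return list(range(start, target + 1))

stream = ['K', 'D', 'D', 'D', 'K', 'D', 'D']
print(decode_to(stream, 6))  # → [4, 5, 6]: only frames 4..6 are needed
```

Note that the cost of a seek is bounded by the distance back to the nearest key-frame, which is why key-frame spacing matters for visual-search.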
Since key-frames are encoded independently of other frames, they require more space to store (or more bandwidth during transfer) and, put generically, can be attributed a higher cost than delta-frames. For the purpose of describing the present invention, a cost ratio of key-frames (K) versus delta-frames (D) 12 is used, in which the ratio of 3:1 has been selected from observation of a range of encoded video content.
To perform rapid visual-search through an encoded video bitstream, the decoder should increase its rate of decoding to match the speed requested by the user. However, this is not always practical due to performance bottlenecks imposed in typical systems by components such as disk I/O, memory I/O and the processor itself. Furthermore, reverse decode may be cumbersome due to the nature of the encoding method, which is optimized for playback in forward chronological order. Therefore, most systems ultimately rely on schemes such as dropping the processing of certain delta-frames or processing only key-frames, in an order and at a rate determined by the direction and speed of the visual-search being conducted by the user.
Based on at least the above-noted factors, many video formats, such as DVD, ensure that there are key-frames regularly inserted throughout the duration of the multimedia content. In fact, the DVD format requires a key-frame to be inserted approximately every 600 ms throughout the content. This regular key-frame insertion delivers good visual-search functionality but at the cost of significantly increasing the file size due to the frequent use of key-frames. Schemes such as these essentially employ encoding the content into segments with the key-frame/delta-frame distribution similar to those illustrated by
Multiple efficient methods and systems to encode video content are described below in accordance with embodiments of the invention. One such encoding method and system inserts key-frames at least every 10 seconds (every 4 seconds for H.264 content) in the absence of scene changes, and additionally at scene-change locations. This ensures efficient encoding of typical produced content. For example, compared to the DVD method of encoding (approx. 2 key-frames per second), methods in accordance with embodiments of the invention yield much smaller file sizes.
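The placement policy described above can be sketched as follows; the timestamps and the 4-second interval are illustrative values, and the function name is hypothetical.

```python
# Sketch of the key-frame placement policy: a key-frame at every scene
# change, plus regular key-frames wherever the maximum interval since
# the previous key-frame would otherwise be exceeded.

def keyframe_times(duration_s, scene_changes, max_interval_s=4.0):
    """Return the times (in seconds) at which key-frames are inserted."""
    times = [0.0]  # the first frame of a stream is always a key-frame
    pending = sorted(t for t in scene_changes if 0.0 < t < duration_s)
    for sc in pending + [duration_s]:
        # Fill any gap longer than max_interval_s with regular key-frames.
        while sc - times[-1] > max_interval_s:
            times.append(times[-1] + max_interval_s)
        if sc < duration_s:
            times.append(sc)  # key-frame at the scene change itself
    return times

print(keyframe_times(10.0, [2.5, 9.0]))  # → [0.0, 2.5, 6.5, 9.0]
```

With no scene changes, this degenerates to one key-frame every `max_interval_s` seconds, which is the worst case for file size under this policy.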
The distribution of key-frames 61 and delta-frames 62 in content 60, encoded in accordance with an embodiment of the invention is illustrated in
In
Another functional area where most devices provide a compromised user-experience is when providing the user with a list of multiple content files. Many devices simply show the text of the file name and may show a static graphical icon for each piece of content. A higher-quality experience in such cases for example could be to provide animated icons which show all or some of the content's video sequences. However, this type of capability would require a playback engine capable of decoding many pieces of content simultaneously, which is typically not feasible.
Application Enhancement Tracks
Application Enhancement Tracks (AETs) are media tracks that are encoded in a manner that improves the performance of one or more applications, or application features. Generally, AETs are re-encoded versions of one or more tracks of the source content. AETs for the most part contain significantly less data than the original file, and the data is present in a form that can be accessed easily to enhance the user-experience when one of the enhanced applications or features is being run. The data in an AET is generally designed such that the utilizing application does not need to achieve performance greater than that required to process the original content.
Hence, an AET designed to allow “6×” visual-search through a title originally encoded at a frame-rate of 30 frames per second (fps) could be generated by re-encoding the original video to a rate of 5 fps. Thus, even when the AET is utilized to deliver “6×” visual-search, the device experiences a load less than or equal to “1×” of that required to decode the original video. To extend this example, the original video could also be spatially scaled down to 50% of its original size in each of the vertical and horizontal dimensions (a factor of four reduction in the number of pixels per frame); in this case the device could perform “24×” visual-search without requiring more than “1×” original video decode performance.
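The arithmetic of the example above reduces to a product of the temporal and spatial reduction factors; a minimal sketch (the function name is illustrative):

```python
# The search speed an AET supports at "1x" decode load is the product
# of the frame-rate reduction and the per-frame pixel reduction.

def max_search_speed(orig_fps, aet_fps, spatial_scale=1.0):
    """spatial_scale: linear scale per dimension (0.5 = half width and
    half height, i.e. a 4x reduction in pixels per frame)."""
    temporal_factor = orig_fps / aet_fps
    pixel_factor = 1.0 / (spatial_scale ** 2)
    return temporal_factor * pixel_factor

print(max_search_speed(30, 5))        # → 6.0  ("6x" visual-search)
print(max_search_speed(30, 5, 0.5))   # → 24.0 ("24x" visual-search)
```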
The AETs can be recorded directly into the same file as the content they offer enhancements for, or they can be stored in a separate file. When stored within the same file as the content they enhance, the enhancement track data can be interleaved with the content, or may be located in one or more contiguous blocks anywhere in the file.
Video AETs
As an example, Video AETs designed to improve the performance of visual-search and/or the displaying of dynamic icons by a preview engine can be created by optionally scaling the content to different spatial and temporal resolutions, and then re-encoding this scaled version into a data stream with a regular pattern of key-frames and, in several embodiments, delta-frames.
In these examples, during visual-search each frame in the AET is decoded and displayed at a rate proportional to the visual-search speed requested by the user. When the decode and display rate required by the visual-search speed exceeds the device's capabilities, the device can change mode, for example, from processing all frames to processing as many key-frames as required or pre-selected to support the required search speed. Since the key-frames have been placed at regular intervals throughout the AET, the visual performance of skipping from key-frame to key-frame will be consistent.
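The mode change described above can be sketched as follows; the parameter names are illustrative, and the stride calculation assumes evenly spaced key-frames as described.

```python
# Decode every AET frame when the device can keep up; otherwise step
# from key-frame to key-frame, widening the step until the decode rate
# fits the device's capability. Because key-frames are evenly spaced
# in the AET, a fixed step keeps the displayed motion consistent.

def decode_plan(search_speed, aet_fps, max_decode_fps, keyframe_interval):
    """keyframe_interval: AET frames between successive key-frames.
    Returns (mode, stride): the step, in AET frames, between decodes."""
    required_fps = search_speed * aet_fps  # AET frames/s at this speed
    if required_fps <= max_decode_fps:
        return ('all', 1)
    stride = keyframe_interval
    while required_fps / stride > max_decode_fps:
        stride += keyframe_interval
    return ('keyframes', stride)

print(decode_plan(4, 5, 30, 10))   # → ('all', 1): 20 fps fits in 30
print(decode_plan(60, 5, 30, 10))  # → ('keyframes', 10): 300/10 = 30 fps
```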
In a similar scheme as illustrated in
Referring again to
Such Video AETs may also contain information in the track's data that relates the encoded video frames to the corresponding frame or time-period in the original title. This information may be encoded as time-stamps relative to the time-line of the content, a file-position offset, or any other mechanism.
The illustrated AETs can also enhance the content-preview experience by virtue of the same properties exploited for visual-search. In typical content-preview modes, a reduced resolution version of the content is displayed along with other information that could be used to identify the content, such as the file name, duration, etc.
By using a Video AET, such as that illustrated in
Audio and Other Media AETs
In a similar fashion to Video AETs, other AETs can be created by following the same principles as those employed in creating the Video AETs. Such media tracks suitable for AETs include audio and subtitle tracks. Example methods for creating Audio AETs 170, 175, from an audio track are illustrated in
Generating AETs
When a user wishes to add one or more AETs to a piece of content, the process of creating and storing (“generating”) the AET(s) takes time and processing power. If this time and processing requirement is imposed at the moment the user wishes to utilize the content, the result is a poor user-experience, since the user is forced to wait before continuing with whatever operation was initiated.
In one embodiment, the need to wait for the AET generation process is removed or reduced by performing AET generation in parallel with another operation such as downloading, burning to disk, or first playback. Another embodiment integrates AET generation into a “background task” that executes when the user's computer is not being actively used, thus allowing a user's personal catalogue of content to be processed while the user performs other tasks with the computer.
In
For example, through experimental testing, by taking 25% of the spatial data (pixels) and 21% of the temporal data (frames) from a source, nearly 95% of the original data can be discarded. The resulting frames are all encoded as key-frames, which are known to be extremely inefficient. However, since the source being encoded is only 5% of the data volume of the original, it has been discovered that the video AET file can be anywhere from a few percent to 10% of the size of the original content (a general rule of thumb of 7.5% can be used in estimating the encoded video AET file-size).
The following example, determined through experimental testing, further illustrates the visual-search enhancement relative to file size. A “normally encoded” (i.e., one or more key-frames per second) movie (23.976 fps) of resolution 1920×816 has a file-size of about 10 GB. The movie is re-mastered with a key-frame rate of one or more key-frames every 4 seconds, reducing the file-size to 8.23 GB, i.e., a 17.7% reduction. The content is then sub-sampled to a resolution of 480×272 and a frame-rate of 5 fps (25% and 21% respectively) to generate an AET source. The AET source is then encoded as key-frames only, resulting in an AET file-size of about 618 MB. The combined file-size of the “best-encoding” with “visual-search enhancement” is 8.85 GB. This is a saving of 1.15 GB from the original file-size and includes improved visual-search performance. In addition, advanced video players and media managers can use the AET to show animated content previews. In this case, a device could perform up to “40×” visual-search (in either the forward or reverse time-line) without requiring more than “1×” original video system performance. Higher speeds of visual-search can be achieved by skipping key-frames as needed to keep the system performance within the limits of the device (or software) performing the visual-search.
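The figures in the example above can be checked directly; all values are taken from the text.

```python
# Reproducing the arithmetic of the experimental example: re-mastered
# title plus key-frame-only AET, compared to the 10 GB original.

main_gb = 8.23    # re-mastered title, key-frame every 4 s
aet_gb = 0.618    # key-frame-only AET at 480x272, 5 fps (618 MB)
combined = main_gb + aet_gb

print(round(combined, 2))         # → 8.85 GB total with visual-search AET
print(round(10.0 - combined, 2))  # → 1.15 GB saved vs. the 10 GB original
```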
A playback system in accordance with an embodiment of the invention is shown in
The devices are configured with client applications that can request all or portions of media files from the media server 192 for playing. The client application can be implemented in software, in firmware, in hardware or in a combination of the above. In many embodiments, the device plays media from downloaded media files. In several embodiments, the device provides one or more outputs that enable another device to play the media. In one example, when the media file includes one or more application enhancement tracks, a device configured with a client application in accordance with an embodiment of the invention can use the AETs to provide a user with trick-play functions. When a user provides a trick-play instruction, the device uses the AETs to execute the trick-play function. In a number of embodiments, the client application requests all or portions of the media file using a transport protocol that allows for downloading of specific byte ranges within the media file. One such protocol is the HTTP 1.1 protocol published by The Internet Society; another is BitTorrent, available from www.bittorrent.org. In other embodiments, other protocols and/or mechanisms can be used to obtain specific portions of the media file from the media server.
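Requesting a specific byte range over HTTP 1.1 can be sketched as follows; the URL is hypothetical, and a server supporting partial content would answer such a request with status 206 and only the requested bytes.

```python
# Sketch of an HTTP/1.1 byte-range request for a portion of a media
# file, e.g. the contiguous block holding an AET within a larger file.

import urllib.request

def range_request(url, start, end):
    """Build a GET request for bytes start..end (inclusive) of url."""
    req = urllib.request.Request(url)
    req.add_header("Range", "bytes=%d-%d" % (start, end))
    return req

req = range_request("http://example.com/movie.mkv", 1024, 2047)
print(req.get_header("Range"))  # → bytes=1024-2047
```

The client would issue one such request per AET block, avoiding download of the full-resolution content when only trick-play frames are needed.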
In
In a number of embodiments, the corresponding position is located utilizing an index. From this location, the media player sequentially decodes and plays the frames in the AET until a user request to stop (215). Through 2×, 4×, etc. fast-forward user requests, the speed at which the AET is decoded and displayed can also be varied by the user. Rewind requests operate in the same manner but in the direction opposite the forward requests. Upon a user “play” request, the media player determines the time and position of the AET relative to the content and from this location sequentially decodes the frames in the content until another user request is received.
In many embodiments, locating a frame with a timestamp corresponding to a frame within an AET can involve locating, within an index contained in the multimedia file, the key-frame with the closest timestamp preceding the desired timestamp, decoding that key-frame, and then decoding the difference frames between the key-frame and the difference frame with the desired timestamp, at which point the presentation can commence playing using the higher resolution content. In many embodiments, other techniques are used to seamlessly transition from viewing low resolution content in an AET during a trick-play mode to the higher resolution tracks that contain the full content within the multimedia file.
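The key-frame lookup described above can be sketched with a binary search over a sorted index; the (timestamp, offset) layout of the index is illustrative only.

```python
# Find the key-frame with the closest timestamp preceding the desired
# timestamp, using an index of (timestamp, file_offset) pairs sorted
# by timestamp. Decoding then starts from the returned offset.

import bisect

def preceding_keyframe(index, timestamp):
    """index: list of (timestamp, offset) tuples sorted by timestamp."""
    times = [t for t, _ in index]
    i = bisect.bisect_right(times, timestamp) - 1
    return index[max(i, 0)]

idx = [(0.0, 0), (4.0, 90_000), (8.0, 185_000)]
print(preceding_keyframe(idx, 6.5))  # → (4.0, 90000): decode from here
```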
In the case where timestamps are not present in the media file, e.g., audio video interleaved (AVI) files, locating the start point to play the higher resolution content is based on the position of the AET within the media file. In one embodiment, a timestamp, although not present in the AET or content, is derived from the frame count and the associated frame rate. Using this derived timestamp, a frame within the high resolution content that corresponds to the AET frame or closest AET frame, and vice versa, can be located.
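The derived-timestamp mapping above can be sketched as follows; the frame rates are the illustrative values used earlier (a 5 fps AET against 23.976 fps content), and the function name is hypothetical.

```python
# Derive a timestamp from frame count and frame rate, then map an AET
# frame index to the nearest frame index in the high resolution content.

def aet_to_content_frame(aet_frame, aet_fps, content_fps):
    ts = aet_frame / aet_fps          # derived timestamp in seconds
    return round(ts * content_fps)    # nearest content frame index

# AET frame 30 at 5 fps corresponds to the 6-second mark, i.e. content
# frame round(6 * 23.976) = 144.
print(aet_to_content_frame(30, 5.0, 23.976))  # → 144
```

The reverse mapping (content frame to nearest AET frame) follows the same derivation with the frame rates swapped.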
Generally, application enhancement tracks are derived from the main content with which they are associated. They are typically encoded to aid the performance of one or more functions related to the content, such as visual-search or content-preview, and can be stored in the same file as the main content or in one or more separate files. AETs provide many factors of improved performance while incurring only a slight increase in the cost associated with the main content (storage, transfer speed, etc.). In fact, since the AET carries a fraction of the cost of the main content, only that track may be needed to perform certain functions, and it can therefore reduce the overall cost of viewing, or otherwise interacting with, the content.
An AET is not tied to any single type of media data or encoding standard, and is in fact equally applicable to many widely used video standards (MPEG-2, MPEG-4 Part 2 and H.264) as well as widely available media formats (DivX, DVD, Blu-ray Disc, HD-DVD).
In several embodiments that implement the methods and systems described above, scalable speeds of visual-search can be conducted in both the forward and reverse directions while incurring additional file size costs of only 5% relative to the size of the main content file. Furthermore, these same tracks can be utilized for content-preview animations.
Finally, it should be understood that while preferred embodiments of the foregoing invention have been set forth for purposes of illustration, the foregoing description should not be deemed a limitation of the invention herein. Accordingly, various modifications, adaptations and alternatives may occur to one skilled in the art without departing from the spirit and scope of the present invention.
This application claims the benefit of U.S. Provisional Patent Application No. 61/018,628 filed Jan. 2, 2008, the disclosure of which is hereby incorporated by reference as if set forth in full herein.
Number | Name | Date | Kind |
---|---|---|---|
5361332 | Yoshida et al. | Nov 1994 | A |
5404436 | Hamilton | Apr 1995 | A |
5479303 | Suzuki et al. | Dec 1995 | A |
5502766 | Boebert et al. | Mar 1996 | A |
5509070 | Schull | Apr 1996 | A |
5715403 | Stefik | Feb 1998 | A |
5717816 | Boyce et al. | Feb 1998 | A |
5754648 | Ryan et al. | May 1998 | A |
5805700 | Nardone et al. | Sep 1998 | A |
5867625 | McLaren | Feb 1999 | A |
5887110 | Sakamoto et al. | Mar 1999 | A |
5892900 | Ginter et al. | Apr 1999 | A |
5946446 | Yanagihara | Aug 1999 | A |
5999812 | Himsworth | Dec 1999 | A |
6018611 | Nogami et al. | Jan 2000 | A |
6031622 | Ristow et al. | Feb 2000 | A |
6044469 | Horstmann | Mar 2000 | A |
6047100 | McLaren | Apr 2000 | A |
6058240 | McLaren | May 2000 | A |
6064794 | Mclaren et al. | May 2000 | A |
6097877 | Katayama et al. | Aug 2000 | A |
6141754 | Choy | Oct 2000 | A |
6155840 | Sallette | Dec 2000 | A |
6175921 | Rosen | Jan 2001 | B1 |
6195388 | Choi et al. | Feb 2001 | B1 |
6222981 | Rijckaert | Apr 2001 | B1 |
6282653 | Berstis et al. | Aug 2001 | B1 |
6289450 | Pensak et al. | Sep 2001 | B1 |
6292621 | Tanaka et al. | Sep 2001 | B1 |
6389218 | Gordon et al. | May 2002 | B2 |
6418270 | Steenhof et al. | Jul 2002 | B1 |
6449719 | Baker | Sep 2002 | B1 |
6466671 | Maillard et al. | Oct 2002 | B1 |
6466733 | Kim | Oct 2002 | B1 |
6510513 | Danieli | Jan 2003 | B1 |
6510554 | Gordon et al. | Jan 2003 | B1 |
6621979 | Eerenberg | Sep 2003 | B1 |
6658056 | Duruöz et al. | Dec 2003 | B1 |
6807306 | Girgensohn et al. | Oct 2004 | B1 |
6810389 | Meyer | Oct 2004 | B1 |
6859496 | Boroczky et al. | Feb 2005 | B1 |
6956901 | Boroczky et al. | Oct 2005 | B2 |
6965724 | Boccon-Gibod et al. | Nov 2005 | B1 |
6965993 | Baker | Nov 2005 | B2 |
7007170 | Morten | Feb 2006 | B2 |
7023924 | Keller et al. | Apr 2006 | B1 |
7043473 | Rassool et al. | May 2006 | B1 |
7150045 | Koelle et al. | Dec 2006 | B2 |
7151832 | Fetkovich et al. | Dec 2006 | B1 |
7151833 | Candelore et al. | Dec 2006 | B2 |
7165175 | Kollmyer et al. | Jan 2007 | B1 |
7185363 | Narin et al. | Feb 2007 | B1 |
7231132 | Davenport | Jun 2007 | B1 |
7242772 | Tehranchi | Jul 2007 | B1 |
7328345 | Morten et al. | Feb 2008 | B2 |
7349886 | Morten et al. | Mar 2008 | B2 |
7356143 | Morten | Apr 2008 | B2 |
7376831 | Kollmyer et al. | May 2008 | B2 |
7406174 | Palmer | Jul 2008 | B2 |
7472280 | Giobbi | Dec 2008 | B2 |
7478325 | Foehr | Jan 2009 | B2 |
7484103 | Woo et al. | Jan 2009 | B2 |
7526450 | Hughes et al. | Apr 2009 | B2 |
7594271 | Zhuk et al. | Sep 2009 | B2 |
7640435 | Morten | Dec 2009 | B2 |
7720352 | Belknap et al. | May 2010 | B2 |
7817608 | Rassool et al. | Oct 2010 | B2 |
7962942 | Craner | Jun 2011 | B1 |
7991156 | Miller | Aug 2011 | B1 |
8023562 | Zheludkov et al. | Sep 2011 | B2 |
8046453 | Olaiya | Oct 2011 | B2 |
8054880 | Yu et al. | Nov 2011 | B2 |
8065708 | Smyth et al. | Nov 2011 | B1 |
8201264 | Grab et al. | Jun 2012 | B2 |
8225061 | Greenebaum | Jul 2012 | B2 |
8233768 | Soroushian et al. | Jul 2012 | B2 |
8249168 | Graves | Aug 2012 | B2 |
8261356 | Choi et al. | Sep 2012 | B2 |
8265168 | Masterson et al. | Sep 2012 | B1 |
8270473 | Chen et al. | Sep 2012 | B2 |
8270819 | Vannier | Sep 2012 | B2 |
8289338 | Priyadarshi et al. | Oct 2012 | B2 |
8291460 | Peacock | Oct 2012 | B1 |
8311115 | Gu et al. | Nov 2012 | B2 |
8321556 | Chatterjee et al. | Nov 2012 | B1 |
8386621 | Park | Feb 2013 | B2 |
8412841 | Swaminathan et al. | Apr 2013 | B1 |
8456380 | Pagan | Jun 2013 | B2 |
8472792 | Butt | Jun 2013 | B2 |
8515265 | Kwon et al. | Aug 2013 | B2 |
8781122 | Chan et al. | Jul 2014 | B2 |
20010036355 | Kelly et al. | Nov 2001 | A1 |
20010046299 | Wasilewski et al. | Nov 2001 | A1 |
20020051494 | Yamaguchi et al. | May 2002 | A1 |
20020110193 | Yoo et al. | Aug 2002 | A1 |
20020136298 | Anantharamu et al. | Sep 2002 | A1 |
20030001964 | Masukura et al. | Jan 2003 | A1 |
20030002578 | Tsukagoshi et al. | Jan 2003 | A1 |
20030035488 | Barrau | Feb 2003 | A1 |
20030035545 | Jiang | Feb 2003 | A1 |
20030035546 | Jiang et al. | Feb 2003 | A1 |
20030093799 | Kauffman et al. | May 2003 | A1 |
20030152370 | Otomo et al. | Aug 2003 | A1 |
20030163824 | Gordon et al. | Aug 2003 | A1 |
20030174844 | Candelore | Sep 2003 | A1 |
20030185542 | McVeigh et al. | Oct 2003 | A1 |
20030229900 | Reisman | Dec 2003 | A1 |
20030231863 | Eerenberg et al. | Dec 2003 | A1 |
20030231867 | Gates et al. | Dec 2003 | A1 |
20030233464 | Walpole et al. | Dec 2003 | A1 |
20030236836 | Borthwick | Dec 2003 | A1 |
20030236907 | Stewart et al. | Dec 2003 | A1 |
20040031058 | Reisman | Feb 2004 | A1 |
20040081333 | Grab et al. | Apr 2004 | A1 |
20040093618 | Baldwin et al. | May 2004 | A1 |
20040105549 | Suzuki et al. | Jun 2004 | A1 |
20040136698 | Mock | Jul 2004 | A1 |
20040139335 | Diamand et al. | Jul 2004 | A1 |
20040158878 | Ratnakar et al. | Aug 2004 | A1 |
20040184534 | Wang | Sep 2004 | A1 |
20040255115 | DeMello et al. | Dec 2004 | A1 |
20050038826 | Bae et al. | Feb 2005 | A1 |
20050071280 | Irwin | Mar 2005 | A1 |
20050114896 | Hug | May 2005 | A1 |
20050193070 | Brown et al. | Sep 2005 | A1 |
20050193322 | Lamkin et al. | Sep 2005 | A1 |
20050204289 | Mohammed et al. | Sep 2005 | A1 |
20050207442 | Zoest et al. | Sep 2005 | A1 |
20050207578 | Matsuyama et al. | Sep 2005 | A1 |
20050273695 | Schnurr | Dec 2005 | A1 |
20050275656 | Corbin et al. | Dec 2005 | A1 |
20060036549 | Wu | Feb 2006 | A1 |
20060037057 | Xu | Feb 2006 | A1 |
20060052095 | Vazvan | Mar 2006 | A1 |
20060053080 | Edmonson et al. | Mar 2006 | A1 |
20060064605 | Giobbi | Mar 2006 | A1 |
20060078301 | Ikeda et al. | Apr 2006 | A1 |
20060129909 | Butt et al. | Jun 2006 | A1 |
20060173887 | Breitfeld et al. | Aug 2006 | A1 |
20060245727 | Nakano et al. | Nov 2006 | A1 |
20060259588 | Lerman et al. | Nov 2006 | A1 |
20060263056 | Lin et al. | Nov 2006 | A1 |
20070031110 | Rijckaert | Feb 2007 | A1 |
20070047901 | Ando et al. | Mar 2007 | A1 |
20070083617 | Chakrabarti et al. | Apr 2007 | A1 |
20070086528 | Mauchly et al. | Apr 2007 | A1 |
20070136817 | Nguyen | Jun 2007 | A1 |
20070140647 | Kusunoki et al. | Jun 2007 | A1 |
20070154165 | Hemmeryckx-Deleersnijder et al. | Jul 2007 | A1 |
20070168541 | Gupta et al. | Jul 2007 | A1 |
20070168542 | Gupta et al. | Jul 2007 | A1 |
20070178933 | Nelson | Aug 2007 | A1 |
20070180125 | Knowles et al. | Aug 2007 | A1 |
20070192810 | Pritchett et al. | Aug 2007 | A1 |
20070217759 | Dodd | Sep 2007 | A1 |
20070234391 | Hunter et al. | Oct 2007 | A1 |
20070239839 | Buday et al. | Oct 2007 | A1 |
20070255940 | Ueno | Nov 2007 | A1 |
20070292107 | Yahata et al. | Dec 2007 | A1 |
20080120389 | Bassali et al. | May 2008 | A1 |
20080126248 | Lee et al. | May 2008 | A1 |
20080137736 | Richardson et al. | Jun 2008 | A1 |
20080187283 | Takahashi | Aug 2008 | A1 |
20080192818 | DiPietro et al. | Aug 2008 | A1 |
20080195744 | Bowra et al. | Aug 2008 | A1 |
20080256105 | Nogawa et al. | Oct 2008 | A1 |
20080263354 | Beuque | Oct 2008 | A1 |
20080279535 | Haque et al. | Nov 2008 | A1 |
20080310454 | Bellwood et al. | Dec 2008 | A1 |
20080310496 | Fang | Dec 2008 | A1 |
20090031220 | Tranchant et al. | Jan 2009 | A1 |
20090048852 | Burns et al. | Feb 2009 | A1 |
20090055546 | Jung et al. | Feb 2009 | A1 |
20090060452 | Chaudhri | Mar 2009 | A1 |
20090066839 | Jung et al. | Mar 2009 | A1 |
20090097644 | Haruki | Apr 2009 | A1 |
20090132599 | Soroushian et al. | May 2009 | A1 |
20090132721 | Soroushian et al. | May 2009 | A1 |
20090132824 | Terada et al. | May 2009 | A1 |
20090150557 | Wormley et al. | Jun 2009 | A1 |
20090169181 | Priyadarshi et al. | Jul 2009 | A1 |
20090178090 | Oztaskent | Jul 2009 | A1 |
20090196139 | Bates et al. | Aug 2009 | A1 |
20090201988 | Gazier et al. | Aug 2009 | A1 |
20090226148 | Nesvadba et al. | Sep 2009 | A1 |
20090290706 | Amini et al. | Nov 2009 | A1 |
20090293116 | DeMello | Nov 2009 | A1 |
20090303241 | Priyadarshi et al. | Dec 2009 | A1 |
20090307258 | Priyadarshi et al. | Dec 2009 | A1 |
20090307267 | Chen et al. | Dec 2009 | A1 |
20090310933 | Lee | Dec 2009 | A1 |
20090313544 | Wood et al. | Dec 2009 | A1 |
20090313564 | Rottler et al. | Dec 2009 | A1 |
20090328124 | Khouzam et al. | Dec 2009 | A1 |
20090328228 | Schnell | Dec 2009 | A1 |
20100040351 | Toma et al. | Feb 2010 | A1 |
20100074324 | Qian et al. | Mar 2010 | A1 |
20100083322 | Rouse | Apr 2010 | A1 |
20100095121 | Shetty et al. | Apr 2010 | A1 |
20100107260 | Orrell et al. | Apr 2010 | A1 |
20100111192 | Graves | May 2010 | A1 |
20100142917 | Isaji | Jun 2010 | A1 |
20100158109 | Dahlby et al. | Jun 2010 | A1 |
20100186092 | Takechi et al. | Jul 2010 | A1 |
20100189183 | Gu et al. | Jul 2010 | A1 |
20100228795 | Hahn | Sep 2010 | A1 |
20100235472 | Sood et al. | Sep 2010 | A1 |
20110047209 | Lindholm et al. | Feb 2011 | A1 |
20110066673 | Outlaw | Mar 2011 | A1 |
20110080940 | Bocharov et al. | Apr 2011 | A1 |
20110082924 | Gopalakrishnan | Apr 2011 | A1 |
20110126191 | Hughes et al. | May 2011 | A1 |
20110135090 | Chan et al. | Jun 2011 | A1 |
20110142415 | Rhyu | Jun 2011 | A1 |
20110145726 | Wei et al. | Jun 2011 | A1 |
20110149753 | Bapst et al. | Jun 2011 | A1 |
20110150100 | Abadir | Jun 2011 | A1 |
20110153785 | Minborg et al. | Jun 2011 | A1 |
20110197237 | Turner | Aug 2011 | A1 |
20110225315 | Wexler et al. | Sep 2011 | A1 |
20110225417 | Maharajh et al. | Sep 2011 | A1 |
20110239078 | Luby et al. | Sep 2011 | A1 |
20110246657 | Glow | Oct 2011 | A1 |
20110246659 | Bouazizi | Oct 2011 | A1 |
20110268178 | Park | Nov 2011 | A1 |
20110302319 | Ha et al. | Dec 2011 | A1 |
20110305273 | He et al. | Dec 2011 | A1 |
20110314176 | Frojdh et al. | Dec 2011 | A1 |
20120023251 | Pyle et al. | Jan 2012 | A1 |
20120093214 | Urbach | Apr 2012 | A1 |
20120170642 | Braness et al. | Jul 2012 | A1 |
20120170643 | Soroushian et al. | Jul 2012 | A1 |
20120170906 | Soroushian et al. | Jul 2012 | A1 |
20120170915 | Braness et al. | Jul 2012 | A1 |
20120173751 | Braness et al. | Jul 2012 | A1 |
20120179834 | Van Der et al. | Jul 2012 | A1 |
20120254455 | Adimatyam et al. | Oct 2012 | A1 |
20120260277 | Kosciewicz | Oct 2012 | A1 |
20120278496 | Hsu | Nov 2012 | A1 |
20120307883 | Graves | Dec 2012 | A1 |
20120311094 | Biderman et al. | Dec 2012 | A1 |
20130019107 | Grab et al. | Jan 2013 | A1 |
20130044821 | Braness et al. | Feb 2013 | A1 |
20130046902 | Villegas Nuñez et al. | Feb 2013 | A1 |
20130061040 | Kiefer et al. | Mar 2013 | A1 |
20130061045 | Kiefer et al. | Mar 2013 | A1 |
20130114944 | Soroushian et al. | May 2013 | A1 |
20130166765 | Kaufman | Jun 2013 | A1 |
20130166906 | Swaminathan et al. | Jun 2013 | A1 |
20140101722 | Moore | Apr 2014 | A1 |
20140189065 | Schaar et al. | Jul 2014 | A1 |
20140201382 | Shivadas et al. | Jul 2014 | A1 |
20140250473 | Braness et al. | Sep 2014 | A1 |
20140359678 | Shivadas et al. | Dec 2014 | A1 |
20140359679 | Shivadas et al. | Dec 2014 | A1 |
20140359680 | Shivadas et al. | Dec 2014 | A1 |
Number | Date | Country |
---|---|---|
1169229 | Dec 1997 | CN |
813167 | Dec 1997 | EP |
936812 | Aug 1999 | EP |
1553779 | Jul 2005 | EP |
08046902 | Feb 1996 | JP |
8111842 | Apr 1996 | JP |
09-037225 | Feb 1997 | JP |
11164307 | Jun 1999 | JP |
11275576 | Oct 1999 | JP |
2001346165 | Dec 2001 | JP |
2002518898 | Jun 2002 | JP |
2004515941 | May 2004 | JP |
2004187161 | Jul 2004 | JP |
2007235690 | Sep 2007 | JP |
669616 | Jan 2007 | KR |
9613121 | May 1996 | WO |
9965239 | Dec 1999 | WO |
0165762 | Sep 2001 | WO |
0235832 | May 2002 | WO |
0237210 | May 2002 | WO |
02054196 | Jul 2002 | WO |
2004102571 | Nov 2004 | WO |
2009065137 | May 2009 | WO |
2010060106 | May 2010 | WO |
2010122447 | Oct 2010 | WO |
2011068668 | Jun 2011 | WO |
2011103364 | Aug 2011 | WO |
2012094171 | Jul 2012 | WO |
2012094181 | Jul 2012 | WO |
2012094189 | Jul 2012 | WO |
2013032518 | Mar 2013 | WO |
2013032518 | Sep 2013 | WO |
Entry |
---|
Tan et al., “Video Transcoder for Fast Forward/Reverse Video Playback”, IEEE ICIP, pp. I-713 to I-716, 2002. |
Author Unknown, “Blu-ray Disc—Wikipedia, the free encyclopedia”, printed Oct. 30, 2008 from http://en.wikipedia.org/wiki/Blu-ray_Disc, 11 pgs. |
Author Unknown, “Blu-ray Movie Bitrates Here—Blu-ray Forum”, printed Oct. 30, 2008 from http://forum.blu-ray.com/showthread.php?t=3338, 6 pgs. |
Author Unknown, “O'Reilly—802.11 Wireless Networks: The Definitive Guide, Second Edition”, printed Oct. 30, 2008 from http://oreilly.com/catalog/9780596100520, 2 pgs. |
Author Unknown, “Turbo-Charge Your Internet and PC Performance”, printed Oct. 30, 2008 from Speedtest.net—The Global Broadband Speed Test, 1 pg. |
Author Unknown, “When is 54 Not Equal to 54? A Look at 802.11a, b and g Throughput”, printed Oct. 30, 2008 from http://www.oreillynet.com/pub/a/wireless/2003/08/08/wireless?throughput.htm., 4 pgs. |
Author Unknown, “White Paper, The New Mainstream Wireless LAN Standard”, Broadcom Corporation, Jul. 2003, 12 pgs. |
Garg et al., “An Experimental Study of Throughput for UDP and VoIP Traffic in IEEE 802.11b Networks”, Wireless Communications and Networkings, Mar. 2003, pp. 1748-1753. |
Kozintsev et al., “Improving last-hop multicast streaming video over 802.11”, Workshop on Broadband Wireless Multimedia, Oct. 2004, pp. 1-10. |
Papagiannaki et al., “Experimental Characterization of Home Wireless Networks and Design Implications”, INFOCOM 2006, 25th IEEE International Conference of Computer Communications, Proceedings, Apr. 2006, 13 pgs. |
Wang et al., “Image Quality Assessment: From Error Visibility to Structural Similarity”, IEEE Transactions on Image Processing, Apr. 2004, vol. 13, No. 4, pp. 600-612. |
International Search Report for International Application No. PCT/US2008/087999, date completed Feb. 7, 2009, date mailed Mar. 19, 2009, 2 pgs. |
Written Opinion of the International Searching Authority for International Application No. PCT/US2008/087999, date completed Feb. 7, 2009, date mailed Mar. 19, 2009, 4 pgs. |
European Search Report Application No. EP 08870152, Search Completed May 19, 2011, Mailed May 26, 2011, 10 pgs. |
“IBM Closes Cryptolopes Unit,” Dec. 17, 1997, CNET News, Retrieved from http://news.cnet.com/IBM-closes-Cryptolopes-unit/2100-1001_3206465.html, 3 pages. |
“Information Technology—Coding of Audio Visual Objects—Part 2: Visual” International Standard, ISO/IEC 14496-2, Third Edition, Jun. 1, 2004, pp. 1-724. |
“Supported Media Formats”, Supported Media Formats, Android Developers, Nov. 27, 2013, 3 pages. |
Cloakware Corporation, “Protecting Digital Content Using Cloakware Code Transformation Technology”, Version 1.2, May 2002, pp. 1-10. |
European Search Report for Application 11855103.5, search completed Jun. 26, 2014, 9 pages. |
European Search Report for Application 11855237.1, search completed Jun. 12, 2014, 9 pages. |
Federal Computer Week, “Tool Speeds Info to Vehicles”, Jul. 25, 1999, 5 pages. |
HTTP Live Streaming Overview, Networking & Internet, Apple, Inc., Apr. 1, 2011, 38 pages. |
Informationweek: Front End: Daily Dose, “Internet on Wheels”, Jul. 20, 1999, 3 pages. |
International Preliminary Report on Patentability for International Application No. PCT/US2011/068276, International Filing Date Dec. 31, 2011, Issue Date Mar. 4, 2014, 23 pages. |
International Search Report and Written Opinion for International Application No. PCT/US2010/56733, International Filing Date Nov. 15, 2010, Report Completed Jan. 3, 2011, Mailed Jan. 14, 2011, 9 pages. |
International Search Report and Written Opinion for International Application PCT/US2011/066927, International Filing Date Dec. 22, 2011, Report Completed Apr. 3, 2012, Mailed Apr. 20, 2012, 14 pages. |
International Search Report and Written Opinion for International Application PCT/US2011/067167, International Filing Date Dec. 23, 2011, Report Completed Jun. 19, 2012, Mailed Jul. 2, 2012, 11 pages. |
International Search Report and Written Opinion for International Application PCT/US2011/068276, International Filing Date Dec. 31, 2011, Report completed Jun. 19, 2013, Mailed Jul. 8, 2013, 24 pages. |
International Search Report for International Application No. PCT/US2005/025845 International Filing Date Jul. 21, 2005, Report Completed Feb. 5, 2007, Mailed May 10, 2007, 3 pages. |
International Search Report for International Application No. PCT/US2007/063950 International Filing Date Mar. 14, 2007, Report Completed Feb. 19, 2008, Mailed Mar. 19, 2008, 3 pages. |
ITS International, “Fleet System Opts for Mobile Server”, Aug. 26, 1999, 1 page. |
Microsoft, Microsoft Media Platform: Player Framework, “Silverlight Media Framework v1.1”, 2 pages. |
Microsoft, Microsoft Media Platform: Player Framework, “Microsoft Media Platform: Player Framework v2.5 (formerly Silverlight Media Framework)”, 2 pages. |
The Official Microsoft IIS Site, Smooth Streaming Client, 4 pages. |
Written Opinion for International Application No. PCT/US2005/025845, International Filing Date Jul. 21, 2005, Report Completed Feb. 5, 2007, Mailed May 10, 2007, 5 pages. |
Written Opinion for International Application No. PCT/US2007/063950, International Filing Date Mar. 14, 2007, Report Completed Mar. 1, 2008, Mailed Mar. 19, 2008, 6 pages. |
“Adaptive Streaming Comparison”, Jan. 28, 2010, 5 pages. |
“Best Practices for Multi-Device Transcoding”, Kaltura Open Source Video, 13 pages. |
“IBM Spearheading Intellectual Property Protection Technology for Information on the Internet; Cryptolope Containers Have Arrived”, May 1, 1996, Business Wire, Retrieved from http://www.thefreelibrary.com/IBM+Spearheading+Intellectual+Property+Protection+Technology+for . . . -a018239381, 6 pages. |
“Netflix turns on subtitles for PC, Mac streaming”, 3 pages. |
Supplementary European Search Report for Application No. EP 10834935, International Filing Date Nov. 15, 2010, Search Completed May 27, 2014, 9 pgs. |
“Thread: SSME (Smooth Streaming Medial Element) config.xml review (Smooth Streaming Client configuration file)”, 3 pages. |
“Transcoding Best Practices”, From movideo, Nov. 27, 2013, 5 pages. |
Inlet Technologies, “The World's First Live Smooth Streaming Event: The French Open”, 2 pages. |
Kim, Kyuheon, “MPEG-2 ES/PES/TS/PSI”, Kyung-Hee University, Oct. 4, 2010, 66 pages. |
Kurzke et al., “Get Your Content Onto Google TV”, Google, Retrieved from: http://commondatastorage.googleapis.com/io2012/presentations/live/%20to%20website/1300.pdf, 58 pages. |
Lang, “Expression Encoder, Best Practices for Live Smooth Streaming Broadcasting”, Microsoft Corporation, 20 pages. |
Levkov, “Mobile Encoding Guidelines for Android Powered Devices”, Adobe Systems Inc., Addendum B, source and date unknown, 42 pages. |
MSDN, “Adaptive streaming, Expression Studio 2.0”, 2 pages. |
Nelson, “Arithmetic Coding+Statistical Modeling=Data Compression: Part 1—Arithmetic Coding”, Doctor Dobb's Journal, Feb. 1991, printed from http://www.dogma.net/markn/articles/arith/art1.htm, Jul. 2, 2003, 12 pages. |
Nelson, “Smooth Streaming Deployment Guide”, Microsoft Expression Encoder, Aug. 2010, 66 pages. |
Nelson, Michael, “IBM's Cryptolopes,” Complex Objects in Digital Libraries Course, Spring 2001, Retrieved from http://www.cs.odu.edu/˜mln/teaching/unc/inls210/?method=display&pkg_name=cryptolopes.pkg&element_name=cryptolopes.ppt, 12 pages. |
Noe, “Matroska File Format (under construction!)”, Jun. 24, 2007, XP002617671, Retrieved from: http://web.archive.org/web/20070821155146/www.matroska.org/technical/specs/matroska.pdf, Retrieved on Jan. 19, 2011, pp. 1-51. |
Ozer, “The 2012 Encoding and Transcoding Buyers' Guide”, Streamingmedia.com, Retrieved from: http://www.streamingmedia.com/Articles/Editorial/Featured-Articles/The-2012-Encoding-and-Transcoding-Buyers-Guide-84210.aspx, 2012, 8 pages. |
“Using HTTP Live Streaming”, iOS Developer Library, Retrieved from: http://developer.apple.com/library/ios/#documentation/networkinginternet/conceptual/streamingmediaguide/UsingHTTPLiveStreaming/UsingHTTPLiveStreaming.html#//apple_ref/doc/uid/TP40008332-CH102-SW1, 10 pages. |
Akhshabi et al., “An Experimental Evaluation of Rate-Adaptation Algorithms in Adaptive Streaming over HTTP”, MMSys'11, Feb. 24-25, 2011, 12 pages. |
Anonymous, “Method for the Encoding of a Compressed Video Sequence Derived from the Same Video Sequence Compressed at a Different Bit Rate Without Loss of Data”, ip.com, ip.com No. IPCOM000008165D, May 22, 2012, pp. 1-9. |
Author Unknown, “Entropy and Source Coding (Compression)”, TCOM 570, Sep. 1999, pp. 1-22. |
Author Unknown, “MPEG-4 Video Encoder: Based on International Standard ISO/IEC 14496-2”, Patni Computer Systems, Ltd., Publication date unknown, 15 pages. |
Author Unknown, “Tunneling QuickTime RTSP and RTP over HTTP”, Published by Apple Computer, Inc.: 1999 (month unknown), 6 pages. |
Blasiak, Darek, “Video Transrating and Transcoding: Overview of Video Transrating and Transcoding Technologies”, Ingenient Technologies, TI Developer Conference, Aug. 6-8, 2002, 22 pages. |
Deutscher, “IIS Transform Manager Beta—Using the MP4 to Smooth Task”, Retrieved from: https://web.archive.org/web/20130328111303/http://blog.johndeutscher.com/category/smooth-streaming, Blog post of Apr. 17, 2010, 14 pages. |
Gannes, “The Lowdown on Apple's HTTP Adaptive Bitrate Streaming”, GigaOM, Jun. 10, 2009, 12 pages. |
Ghosh, “Enhancing Silverlight Video Experiences with Contextual Data”, Retrieved from: http://msdn.microsoft.com/en-us/magazine/ee336025.aspx, 15 pages. |
Inlet Technologies, “Adaptive Delivery to iDevices”, 2 pages. |
Inlet Technologies, “Adaptive Delivery to iPhone 3.0”, 2 pages. |
Inlet Technologies, “HTTP versus RTMP”, 3 pages. |
Pantos, “HTTP Live Streaming, draft-pantos-http-live-streaming-10”, IETF Tools, Oct. 15, 2012, Retrieved from: http://tools.ietf.org/html/draft-pantos-http-live-streaming-10, 37 pages. |
Pantos, “HTTP Live Streaming: draft-pantos-http-live-streaming-06”, Published by the Internet Engineering Task Force (IETF), Mar. 31, 2011, 24 pages. |
Phamdo, Nam, “Theory of Data Compression”, printed from http://www.datacompression.com/theoroy.html on Oct. 10, 2003, 12 pages. |
RGB Networks, “Comparing Adaptive HTTP Streaming Technologies”, Nov. 2011, Retrieved from: http://btreport.net/wp-content/uploads/2012/02/RGB-Adaptive-HTTP-Streaming-Comparison-1211-01.pdf, 20 pages. |
Schulzrinne, H. et al., “Real Time Streaming Protocol 2.0 (RTSP): draft-ietf-mmusic-rfc2326bis-27”, MMUSIC Working Group of the Internet Engineering Task Force (IETF), Mar. 9, 2011, 296 pages. |
Siglin, “HTTP Streaming: What You Need to Know”, streamingmedia.com, 2010, 16 pages. |
Siglin, “Unifying Global Video Strategies, MP4 File Fragmentation for Broadcast, Mobile and Web Delivery”, Nov. 16, 2011, 16 pages. |
Wu, Feng et al., “Next Generation Mobile Multimedia Communications: Media Codec and Media Transport Perspectives”, In China Communications, Oct. 2006, pp. 30-44. |
Zambelli, Alex, “IIS Smooth Streaming Technical Overview”, Microsoft Corporation, Mar. 2009. |
Number | Date | Country | |
---|---|---|---|
20090169181 A1 | Jul 2009 | US |
Number | Date | Country | |
---|---|---|---|
61018628 | Jan 2008 | US |