The present invention generally relates to the field of live encoding of adaptive bitrate streams from live input streams. Specifically, the present invention relates to several techniques for optimizing and improving the live encoding of adaptive bitrate streams from live input streams.
Streaming technology has advanced to the point of supporting live over-the-top streaming. Live events can now be viewed from adaptive bitrate streams generated by live encoding servers. Often, live encoding servers utilize the MPEG-DASH format (i.e., Dynamic Adaptive Streaming over HTTP). MPEG-DASH (ISO/IEC 23009-1) is a standard for streaming multimedia content over the internet. MPEG-DASH was developed by the Moving Picture Experts Group (MPEG). MPEG has been responsible for developing previous multimedia standards, including MPEG-2, MPEG-4, MPEG-7, MPEG-21 and others. MPEG-DASH is an adaptive bitrate streaming technique that enables high quality streaming of media content over the Internet delivered from conventional HTTP web servers. Typically, MPEG-DASH uses sequences of small files that are retrieved via Hypertext Transfer Protocol (HTTP), each containing a segment of video covering a short interval of playback time of a presentation. Presentations can be live events and/or have specified durations. The adaptive bitrate streams can be made available at a variety of different bit rates, such as 300 kb/s, 500 kb/s, and 3 Mb/s. Live encoding and/or transcoding of source streams into multiple adaptive bitrate streams can require substantial computing resources, and live encoding hardware is fairly expensive.
Turning now to the drawings, live encoding systems in accordance with embodiments of the invention are illustrated. In several embodiments, the live encoding systems receive live media feeds such as (but not limited to) sporting events, live news coverage, web live streams, and/or singular or multiplexed streams of media. Streams of media contain multimedia that is constantly received by and presented to a client while being delivered by a provider. Streaming refers to the process of delivering media via streams. Live encoding systems can provide streams of media to clients encoded from a live input stream. Moreover, live encoding systems can encode received live media feeds into several different adaptive bitrate streams having different maximum bitrates. The live encoding systems can further transmit the encoded adaptive bitrate streams in live media presentations to streaming clients via protocols including (but not limited to) HTTP requests and/or provide the encoded adaptive bitrate streams to servers for distribution to client devices. Encoding and transmission of live media presentations can be taxing on the hardware used to perform these operations. Embodiments of the invention provide for several techniques to reduce the load on hardware performing live encoding and transmission operations. For instance, live encoding systems in accordance with many embodiments of the invention can assess network and/or server load levels according to several measures. Load is often measured as an amount of work (e.g., computations, encoding operations, memory operations, etc.) a live encoding system is performing. Based on the assessments, the live encoding systems can adjust how frames of video from live media feeds are being encoded. For instance, some embodiments of the live encoding systems replicate a current encoded frame instead of re-encoding said current frame, and then adjust the replicated frame to different bitrates, resolutions, and/or contexts as necessary for the several different adaptive bitrate streams. In addition, various embodiments of the live encoding systems can extend the duration of a current frame being repackaged and/or re-encoded. Utilizing these and other techniques, live encoding systems in accordance with embodiments of the invention can more efficiently handle gaps in received data, slower feeding of data, and/or heavy loads on server hardware.
Network transmission levels can affect live encoding processes. For instance, when a live media feed suffers interruptions in network transmission levels from the live input stream to the live encoding system, the live encoding system may encounter a gap in incoming data. Gaps in incoming data can produce gaps in output data and/or result in the live encoding system failing to deliver output frames when requested. Live encoding systems in accordance with some embodiments of the invention can assess incoming media feeds to determine when gaps have occurred. These assessments can be based on several measures including (but not limited to) incoming frame rate, incoming bit rates, time between arrived frames, and/or network bandwidth measurements. Live encoding systems in accordance with many embodiments of the invention can compensate for detected gaps in data by replicating frames and/or extending frames during repackaging of incoming media streams into several adaptive bitrate streams. By replicating frames and/or extending frames, the live encoding systems can allow network conditions a chance to stabilize without jeopardizing the availability of frames at the requested times that clients depend on. Specifically, the live encoding system could otherwise fall behind the live edge of live streamed media. Clients typically request frames from a live stream at the live edge of the presentation. When used herein, the term “live edge” refers to the most recently encoded segments of the live stream that clients can request without the risk of requesting segments that are not yet available. Requesting not-yet-available segments results in numerous streaming errors such as (but not limited to) delays and HTTP not found errors, and can result in bandwidth-clogging repeated requests.
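By way of illustration, the following is a minimal sketch, not drawn from any specific embodiment, of how inter-frame arrival times might be compared against a declared frame rate to detect such gaps; the class name GapDetector, the monotonic-clock source, and the two-frame threshold are illustrative assumptions only.

```python
import time

class GapDetector:
    """Flags gaps in an incoming live feed by comparing inter-frame
    arrival times against the feed's declared frame rate."""

    def __init__(self, declared_fps=30.0, missing_frame_threshold=2):
        self.frame_interval = 1.0 / declared_fps     # expected seconds between frames
        self.threshold = missing_frame_threshold     # frames missed before acting
        self.last_arrival = None

    def on_frame_arrival(self, arrival_time=None):
        """Returns the number of frames presumed missing since the last arrival,
        or 0 if the count is still below the configured threshold."""
        now = arrival_time if arrival_time is not None else time.monotonic()
        missing = 0
        if self.last_arrival is not None:
            elapsed = now - self.last_arrival
            # Every full frame interval beyond the first is a presumed missing frame.
            missing = max(0, int(elapsed / self.frame_interval) - 1)
        self.last_arrival = now
        return missing if missing >= self.threshold else 0
```

A repackaging loop could call on_frame_arrival() for each received frame and begin extending or replicating frames whenever a non-zero count is returned.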
Server load levels can affect live encoding processes as well. Where a live encoding system is implemented as a live encoding server, the server hardware can become overwhelmed by encoding processes. Where a live encoding server falls behind the live edge, the several adaptive bitrate streams can fail as the clients rely on requests being made at the live edge. Specifically, live streaming clients can request segments of video based on an assumption that live encoding systems generate the segments no slower than real time. Live encoding systems in accordance with many embodiments of the invention can compensate for server load by extending current frames and adjusting timestamps of output frames. The extended frames can produce minor and/or difficult-to-perceive visual errors but preserve the request-and-receive HTTP cycle that clients depend on for live streaming. Moreover, live encoding systems in accordance with embodiments of the invention can also compensate for server load by replicating current frames and adjusting their frame contexts as necessary for the output streams.
Having discussed a brief overview of the operations and functionalities of live encoding systems in accordance with many embodiments of the invention, a more detailed discussion of systems, servers, and methods for live encoding systems in accordance with embodiments of the invention follows below.
Network Architectures for Live Encoding Systems
A network architecture for a live encoding system in accordance with an embodiment of the invention is illustrated in
The live encoding servers and supporting hardware 102 can communicate over network 104 with several groups of devices in order to provide streams of content. The groups of devices include (but are not limited to) web, file, and/or media servers 106, computing devices 108, and/or mobile devices 112. Users of the devices from these groups of devices can view provided streaming content utilizing local streaming clients. In addition, a web server from the web, file, and/or media servers 106 can also serve as a host for additional downstream viewers and/or clients of the provided streaming content.
As illustrated in
In the embodiment illustrated in
Although a specific architecture is shown in
Systems and Processes for Live Encoding Servers
In live encoding systems, clients often rely on being able to request and receive frames at the live encoding edge. Any interruptions in encoding and/or transmission can result in clients failing to receive needed frames, failed HTTP requests, image stuttering, and general frustration for viewers. Live encoding systems in accordance with numerous embodiments of the invention can use real time analysis of incoming media and/or encoding system loads to mitigate losses and interruptions in live encoding through techniques discussed below.
Media can be received (210). As mentioned above, media can encompass numerous different types, formats, standards, and/or presentations. Often, the received media is a live feed of already encoded media. The received media can include (but is not limited to) input streams, live media feeds, television feeds, satellite feeds, web streams, and/or static files received from local and/or remote storages.
Streams can be generated (220) from the received media. The generated streams can be of many possible formats, such as (but not limited to) MPEG-DASH, H.264/AVC, HTTP Live Streaming, Smooth Streaming, and/or any other adaptive bitrate format. The generated streams can then be provided to streaming clients over a network connection. Typically, the generated streams will be of different maximum bitrates and be encoded according to varying encoding parameters. In some embodiments, streams are generated utilizing a repackaging application of a live encoding server. The repackaging application repackages received media into output streams. To do so, the repackaging application can utilize various encoders and decoders as necessary to generate the streams.
The generation of streams can be a continuous process that is performed as live media is received. During continuous generation of streams in response to receipt of live media, load levels on the live encoding system, load levels in a communication network, gaps in receipt of media, and/or gaps in generation of streams can be assessed (230). Moreover, different embodiments may assess other aspects of live encoding server operations. Performing said assessments can include several sub-operations. For instance, the live encoding system can check incoming data rates and/or frame rates of the received media. The incoming data rates and/or frame rates of the received media can be compared to frame times determined according to internal logic of the live encoding system. The internal logic can include several sources for determining a reliable time, such as (but not limited to) time stamps of the received media, clock implementations on the live encoding system, and/or the declared frame rate of the received media. In some embodiments, the live encoding systems can measure differences in times between incoming frames in order to calculate an overall incoming data rate. The live encoding systems can then monitor the calculated overall incoming data rate to identify gaps in incoming data or potential surges that may overwhelm the processing power of the live encoding system. One or more of these assessments can indicate that the live encoding system has not received a frame at a proper time and/or will fail to encode a frame in time to meet the live edge requirement for live encoding systems.
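As a sketch of how such assessments might be combined, the following assumes a rolling window of inter-frame arrival deltas and per-frame encode times; the window size, the 1.5x tolerance, and the class and method names are illustrative rather than taken from any embodiment.

```python
from collections import deque

class EncodingLoadMonitor:
    """Rolling-window view of incoming frame timing and encode cost,
    used to decide whether the encoder risks missing the live edge."""

    def __init__(self, declared_fps=30.0, window=120):
        self.expected_interval = 1.0 / declared_fps
        self.arrival_deltas = deque(maxlen=window)   # seconds between incoming frames
        self.encode_times = deque(maxlen=window)     # seconds spent encoding each frame
        self.last_arrival = None

    def record_arrival(self, t):
        if self.last_arrival is not None:
            self.arrival_deltas.append(t - self.last_arrival)
        self.last_arrival = t

    def record_encode(self, seconds):
        self.encode_times.append(seconds)

    def incoming_rate_ok(self):
        """True when the measured incoming rate is close to the declared rate."""
        if not self.arrival_deltas:
            return True
        avg = sum(self.arrival_deltas) / len(self.arrival_deltas)
        return avg <= 1.5 * self.expected_interval

    def encoder_keeping_up(self):
        """True when the average encode time still fits inside one frame interval."""
        if not self.encode_times:
            return True
        avg = sum(self.encode_times) / len(self.encode_times)
        return avg < self.expected_interval
```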
In order to mitigate the risk of failing to generate frames in time for the live edge, frames of received media can optionally be duplicated and/or replicated (240). In some embodiments, the duplicated frames can be modified to account for new frame contexts associated with the various generated streams. Different frame contexts can include (but are not limited to) different resolutions, different frame types (such as I-frames, B-frames, and/or P-frames), and/or different maximum bitrates. Generation of streams from received media often involves re-encoding the received media to a different format where the received media includes encoded frames. Re-encoding of the received media can be among the more resource intensive operations performed by live encoding systems. The duplicated frames can then be utilized in the generated streams without a relatively costly re-encoding operation. Moreover, frames can be duplicated from raw frames of the received media in addition to encoded frames of the received media.
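The following sketch illustrates duplicating an already-encoded frame for several output streams by copying it and adjusting only per-stream context fields; it assumes frames can be represented as simple records with a shared encoded payload, and the EncodedFrame type and field names are hypothetical.

```python
import copy
from dataclasses import dataclass, field

@dataclass
class EncodedFrame:
    timestamp: float          # presentation time stamp in seconds
    duration: float           # display duration in seconds
    payload: bytes            # already-encoded bitstream data
    context: dict = field(default_factory=dict)   # per-stream fields (bitrate, resolution, ...)

def replicate_for_streams(frame, stream_contexts):
    """Duplicate an already-encoded frame for each output stream,
    adjusting only the per-stream context fields rather than re-encoding."""
    replicas = []
    for ctx in stream_contexts:
        replica = copy.copy(frame)                  # payload is shared, not re-encoded
        replica.context = dict(frame.context, **ctx)
        replicas.append(replica)
    return replicas

# Example: the same encoded payload reused for three output streams.
source = EncodedFrame(timestamp=10.0, duration=1 / 30, payload=b"\x00\x01...")
outputs = replicate_for_streams(source, [
    {"max_bitrate_kbps": 300},
    {"max_bitrate_kbps": 500},
    {"max_bitrate_kbps": 3000},
])
```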
However, replicating encoded frames instead of re-encoding the frames as a part of a live encoding process can result in the output streams violating certain requirements of the hypothetical reference decoder (HRD) in H.264/AVC. By definition, the HRD shall not overflow nor underflow when its input is a compliant stream. Replicating a large encoded frame and utilizing the replicated frame in a low maximum bitrate stream risks causing a buffer overflow that would fail the HRD requirements. Software decoder clients can typically compensate for this due to their more flexible buffers, although they can require additional CPU cycles to process the replicated frames. Hardware decoder clients, by contrast, can encounter errors due to possible buffer overflows when replicated frames are used in lower maximum bitrate streams. Some embodiments of the invention provide for reducing the bit values of replicated frames for lower maximum bitrate output streams in order to mitigate against the risk of buffer overflows in hardware decoders. In yet other embodiments, duplicated frames are only used for their own specific maximum bitrate output streams, thereby preventing high bit value frames from being utilized in low maximum bitrate streams. This can be accomplished by including separate encoding processes for each output stream.
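One way to keep high-bit-value frames out of low maximum bitrate streams is sketched below; it assumes each output stream records its maximum bitrate and that a re-encode fallback is available, and the function and field names are illustrative only.

```python
def choose_frames_for_output(encoded_frame, output_streams, reencode):
    """Decide, per output stream, whether an already-encoded frame can be
    replicated directly or should be re-encoded to respect buffer limits.

    A frame is only reused in streams whose maximum bitrate is at least the
    bitrate the frame was encoded at, which avoids the buffer overflows a
    hardware decoder could hit if a large frame landed in a low-bitrate stream."""
    results = {}
    for stream in output_streams:
        if stream["max_bitrate_kbps"] >= encoded_frame["encoded_bitrate_kbps"]:
            results[stream["name"]] = encoded_frame                    # cheap replication
        else:
            results[stream["name"]] = reencode(encoded_frame, stream)  # costly fallback
    return results
```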
Moreover, in some embodiments, frames can be replicated and/or duplicated from input streams where the input stream and the output stream share the same formats, maximum bitrates, and/or resolutions. This can occur where the desired output stream is the same as the input stream. Where this occurs, re-encoding can be skipped and several embodiments can simply replicate the instantaneous decoding refresh (IDR) frames from the input streams. As discussed above, the resulting output stream can be non-HRD compliant in said several embodiments.
In a further technique to mitigate the risk of failing to generate frames in time for the live edge, frames of received media can optionally be extended (250). Extending frames can include packaging a given frame into an output stream at times different than the given frame's assigned time stamp. Depending on previous assessments, different extensions of frames may occur. Where a gap is detected in feeding and/or receiving of media, a current frame may be extended in generation of the output streams. In embodiments utilizing a repackaging application as a part of a live encoding server, the repackaging application can perform the extension during repackaging of frames into output streams. In order to reduce visual artifacts and/or perceptual stalls in video, the repackaging application can spread several smaller frame extensions over multiple frames in order to compensate for the gap in multiple steps. The smaller extensions can serve to conceal the extensions from streaming client viewers.
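A minimal sketch of spreading an extension over several frames follows, assuming frames are represented as simple records with timestamp and duration fields measured in seconds; the step count and field names are illustrative.

```python
def spread_gap_over_frames(frames, gap, steps=4):
    """Absorb a timing gap (in seconds) by stretching the durations of the
    next `steps` frames slightly, instead of one large, visible extension.

    `frames` is a list of dicts with 'timestamp' and 'duration' keys; the
    list is modified in place and the total timestamp offset is returned."""
    per_frame = gap / steps
    offset = 0.0
    for i, frame in enumerate(frames):
        frame["timestamp"] += offset          # shift later frames by the accumulated extension
        if i < steps:
            frame["duration"] += per_frame    # small, hard-to-perceive stretch per frame
            offset += per_frame
    return offset

# Example: a 0.1 s gap hidden across four consecutive 1/30 s frames.
window = [{"timestamp": n / 30, "duration": 1 / 30} for n in range(8)]
spread_gap_over_frames(window, gap=0.1, steps=4)
```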
The generated output streams can be provided (260) to streaming clients. The generated output streams can be at different maximum bitrates yet each represents a single media presentation. Thus, a given media presentation can be provided to streaming clients in several streams having different maximum bitrates. The provision of generated output streams can be accomplished via HTTP requests for segments from the generated output streams.
While the operations presented in process 200 are presented in a linear order, various embodiments can perform said operations in varying orders. For instance, the generation and provision of streams to clients can be performed continuously as live media is received. Thus, the order of operations presented in process 200 is merely demonstrative and can be performed continuously as a part of a cyclical process for live generation of streams from frames of received media. Having discussed an overview of processes performed by live encoding systems of some embodiments, the following discussion will provide several examples of frame extension and frame replication that can be performed as a part of said processes.
Examples of Frame Extension and Frame Replication
As discussed above, live encoding systems in accordance with embodiments of the invention can extend frames and/or replicate frames in response to assessed network and/or server conditions. Frame extensions and/or frame replications can compensate for dropped input frames, delayed input frames, and/or encoding system load.
As shown, input stream 310 includes several frames with identified time stamps and durations. The frames can include portions of media, such as frames of video. Time stamps are indicated by the abbreviation “TS”. Durations are indicated by the abbreviation “D”. As mentioned previously, the values shown in
Live encoding system 300 expects to receive frames from input stream 310 at specified times. When frames are not received at the specified times, live encoding system 300 may not be able to generate the output stream 360 in time for the live edge expected by live streaming clients. Live encoding system 300 can assess whether frames are missing from the input stream 310 using a variety of measures as discussed above, such as comparing internal clocks maintained by the live encoding system 300 to the time stamps of the received frames of the live input stream 310. Live encoding system 300 can also include thresholds for missing frames that must be met before extending frames. Live encoding system 300 includes a threshold of two missing frames before electing to extend frames to compensate for the at least two frame gap. Different embodiments may include different thresholds that can be based on a different number of frames and/or a different threshold measurement, such as missing frames over a segment of time instead of missing frames in sequence. Live encoding of video is inherently a resource-intensive process; thus, various embodiments can utilize a variety of thresholds in connection with assessing encoding conditions, such as encoding system loads, client stuttering, network bandwidth stability, video quality, and other metrics and/or conditions that can affect live encoding of video. As discussed above, specific counts of frames and their delivery can be calculated and compared to different thresholds of frame counts and times in different embodiments of the invention. Furthermore, different embodiments can use different metrics for assessing streaming conditions, such as processing cycle counts, time benchmarks for encoding of sets of frames, network transfer rates, delivered and displayed framerates, and various measurements of visual quality/fidelity. While specific values are not provided herein, different specific values (such as dips below 24 frames per second, visual errors causing display failures in excess of certain gamma values, frames encoded per second, etc.) can be utilized as necessary to implement the invention without departing from the spirit of the invention.
Input frames can go missing under a variety of different circumstances, such as (but not limited to) when there is a failure in the network connection between the provider of the input stream and the live encoding system, when there is a fault in the input stream, and/or when there are internal errors of the live encoding system. As shown, input stream 310 is missing frames 330 and 340. Live encoding system 300 can detect this gap by comparing the time stamp of frame 8 (350) to the time stamp of frame 5 (320) and an internal clock maintained by live encoding system 300. Once the missing frame threshold is met, live encoding system 300 can extend frames to compensate for the gap in frames. Various embodiments can use different thresholding schemes, including any of those discussed above.
As shown, live encoding system 300 extends frame 5 (320) from the input stream 310 in generating output stream 360. Extended frame 370 is extended to have a duration value equal to 3 in order to cover the missing frames 330 and 340. Extended frame 370 will be available when requested by live streaming clients and preserves the live edge required to support uninterrupted live streaming. However, extending frame durations can result in visual artifacts if used excessively.
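The extension in this example can be sketched as follows, assuming integer time stamps and durations as shown in the figure; the dictionaries stand in for the frames of input stream 310 and are illustrative only.

```python
def extend_frame_over_gap(frame, next_timestamp):
    """Stretch a frame's duration so it covers the gap up to the next frame
    that actually arrived (here, frame 5 covering missing frames 6 and 7)."""
    frame["duration"] = next_timestamp - frame["timestamp"]
    return frame

frame_5 = {"timestamp": 5, "duration": 1}   # last frame received before the gap
frame_8 = {"timestamp": 8, "duration": 1}   # first frame received after the gap
extend_frame_over_gap(frame_5, frame_8["timestamp"])
assert frame_5["duration"] == 3             # matches the extended frame in this example
```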
Embodiments of the invention are not limited to the frame extension techniques discussed above with respect to
Live encoding servers typically are very powerful and expensive machines that need significant computing power to encode live streams that meet the live edge requirement. However, even powerful servers can become overloaded, and lesser servers even more so. In particular, re-encoding encoded frames can be a serious drain on server resources.
As shown, live encoding system 700 receives encoded frame 4 (720) and encoded frame 5 (730). Live encoding system 700 replicates these frames in generating encoded output stream 750. Frame fields for replicated frame 4 (760) and replicated frame 5 (770) may have to be adjusted in order to account for the new frame context. However, these adjustments can require significantly less processing resources as compared to re-encoding operations. Replicated frame 4 (760) and replicated frame 5 (770) have the same duration values and time stamp values as encoded frame 4 (720) and encoded frame 5 (730).
Embodiments of the invention are not limited to the specific frame replication techniques discussed above in the example conceptually illustrated in
MPEG-DASH Live Encoding
MPEG-DASH (ISO/IEC 23009-1) is a standard for streaming multimedia content over the internet. MPEG-DASH was developed by the Moving Picture Experts Group (MPEG). MPEG has been responsible for developing previous multimedia standards, including MPEG-2, MPEG-4, MPEG-7, MPEG-21 and others. MPEG-DASH provides for adaptive segmented media delivery using HTTP. The MPEG-DASH specification only defines the MPD and the segment formats. Of note, the delivery of the MPD and the media-encoding formats containing the segments, as well as the client behavior for fetching, adaptation heuristics, and playing content, are undefined within the MPEG-DASH standard.
As shown, live encoding system 820 is receiving media feed data 810. Media feed data 810 can include at least the types of received media discussed above. Live encoding system 820 can generate output streams from the received media feed data 810. During generation of the output streams from the received media feed data 810, live encoding system 820 can replicate frames from the media feed data 810 and/or extend frames from the media feed data 810 based on assessments of the rate of receipt of media feed data 810, load levels on the live encoding system 820, load levels in the communication network supporting the transmission of media feed data 810, gaps in the media feed data 810, and/or gaps in generation of streams by the live encoding system 820.
Live encoding system 820 also receives HTTP requests 830. In response to the HTTP requests, live encoding system 820 provides requested stream segments 840. HTTP requests 830 can include byte range requests for a specific segment from one of the generated output streams. Live encoding system 820 can include multiple components, including separate live encoding servers and HTTP servers. The HTTP servers can support the HTTP communication of media segments and requests with clients. Moreover, the HTTP servers can utilize HTTP-based Content Distribution Networks (CDNs) to assist in delivery of media segments to streaming client 850.
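A byte range request for a segment might look like the following sketch, which assumes an HTTP server that honors Range headers; the URL and byte range are placeholders rather than values from any embodiment.

```python
import urllib.request

# Fetch part of one media segment from a generated output stream. The URL and
# byte range are placeholders; real values come from the MPD's segment information.
SEGMENT_URL = "https://example.com/live/video_3000kbps/segment_00042.m4s"

request = urllib.request.Request(SEGMENT_URL, headers={"Range": "bytes=0-65535"})
with urllib.request.urlopen(request) as response:
    segment_bytes = response.read()
    print(response.status, len(segment_bytes))   # 206 Partial Content for a range request
```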
MPEG-DASH uses a Media Presentation Description (MPD) to provide clients with a well-structured XML manifest describing several adaptive bitrate streams that can be accessed via HTTP requests for stream segments. Each MPD corresponds to a single media presentation that can be viewed via the several described adaptive bitrate streams. The MPD describes accessible media segments and corresponding timings for the accessible media segments. The MPD is a hierarchical data model including (descending from the top of the hierarchy) a media presentation, periods, adaptation sets, representations, and segments. A media presentation can correspond to a live broadcast, a live stream, a live event, and/or a pre-recorded media presentation. A media presentation can be spliced and/or include several periods. The periods are by default unlinked and can have advertising periods spliced between them without any loss of functionality. Periods can include several adaptation sets. Adaptation sets can include different perspectives on the same presentation, such as different cameras from a live sporting event. In addition, different adaptation sets can include different formats, such as audio adaptation sets and video adaptation sets. Within each adaptation set, several representations may be included. Representations support the selection of different bandwidth and/or maximum bitrate levels from the same presentation. Thus, clients of MPEG-DASH can use adaptive bitrate streaming by switching to different representations as bandwidth and/or client loading allows. Each representation includes segments of media that can be requested via HTTP. The HTTP requests are received on pre-formatted URLs associated with each segment.
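The hierarchy described above can be sketched by assembling a skeletal MPD; the attribute values below (segment duration, update period, bitrates) are placeholders and not values prescribed by the MPEG-DASH specification.

```python
import xml.etree.ElementTree as ET

# Build a skeletal MPD reflecting the hierarchy described above:
# MPD -> Period -> AdaptationSet -> Representation (one per maximum bitrate).
mpd = ET.Element("MPD", {
    "xmlns": "urn:mpeg:dash:schema:mpd:2011",
    "type": "dynamic",                      # live presentations use the dynamic type
    "minimumUpdatePeriod": "PT2S",
})
period = ET.SubElement(mpd, "Period", {"id": "1", "start": "PT0S"})
video_set = ET.SubElement(period, "AdaptationSet", {"mimeType": "video/mp4"})
for kbps in (300, 500, 3000):
    rep = ET.SubElement(video_set, "Representation", {
        "id": f"video_{kbps}k",
        "bandwidth": str(kbps * 1000),
    })
    ET.SubElement(rep, "SegmentTemplate", {
        "media": f"video_{kbps}k_$Number$.m4s",
        "duration": "2",                     # two-second segments
        "startNumber": "1",
    })

print(ET.tostring(mpd, encoding="unicode"))
```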
Of note, instances of ellipses illustrated in
Live Encoding Server Architecture
An architecture of a live encoding server 1000 in accordance with an embodiment of the invention is illustrated in
The input data handling application 1050 receives input streams from the network interface 1040. The input streams can include (but are not limited to) live streams of video content, media presentations, video only files, audio only files, sporting events, web streams, and/or MPEG-DASH standard streams. The input data handling application 1050 can perform additional functions including identification of the input streams. Identification can be performed using metadata included with the input streams and/or by assessing characteristics and parameters of the input streams.
The demuxer application 1055 demultiplexes individual elementary streams from an input stream. For instance, the demuxer application 1055 can break out the audio, video, and/or subtitle streams within an input stream. The demultiplexed streams can be analyzed, decoded, and re-encoded in subsequent operations performed by other applications.
The repackager application 1060 can perform the re-encoding, duplication, and frame extension operations as a part of the overall live encoding server operations. The repackager application 1060 can receive input streams from the input data handling application 1050, the demuxer application 1055, the network interface 1040, and/or any other component of the live encoding server 1000 as necessary to repackage streams. The repackager application 1060 can re-encode incoming live frames of received media into several output streams utilizing the video decoder application 1090 and the video encoder application 1095 as necessary. During re-encoding operations, the repackager application 1060 can assess network and/or server load levels of the live encoding server 1000 according to several measures. Based on these assessments, the repackager application 1060 can duplicate incoming frames to reduce server load levels and/or extend certain frames to compensate for anticipated drops in incoming network bandwidth. The repackager application 1060 can extend frames by manipulating time codes and/or time stamps of frames to increase their duration in output streams. The repackager application 1060 can provide the repackaged, re-encoded, duplicated, and/or extended frames of output streams to the MPD combination application 1065 and/or the MPD generation application 1070 for preparation for later streaming to clients utilizing the HTTP request application 1075.
The MPD combination application 1065 combines multiple output streams generated by the repackager application 1060 into a single presentation. The MPD generation application 1070 can generate an MPD file for the combined presentation. As discussed above, the MPD file can describe the periods, adaptation sets, representations, and segments of a media presentation. The MPD generation application 1070 generates MPDs according to characteristics of the generated output streams. These characteristics will vary according to the operations performed by the repackager application 1060. The MPD file is typically requested first and provided to streaming clients in order to initiate an MPEG-DASH streaming session.
The HTTP request application 1075 handles HTTP requests and serves media segments according to said HTTP requests. The HTTP request application 1075 may communicate with streaming clients through the network interface 1040. In some embodiments, the HTTP request application 1075 is hosted in a separate HTTP server from the live encoding server.
The non-volatile memory 1030 includes audio decoder application 1080, audio encoder application 1085, video decoder application 1090, and video encoder application 1095. While non-volatile memory 1030 only includes a single video decoder application 1090 and a single video encoder application 1095, other embodiments may include multiple video encoder and video decoder applications. Moreover, some embodiments may utilize sets of applications for each output stream in order to have separate repackager, decoder, and encoder applications to generate each different output stream.
In several embodiments, the network interface 1040 may be in communication with the processor 1010, the volatile memory 1020, and/or the non-volatile memory 1030. The above discussion of the applications stored in the non-volatile memory 1030 of the live encoding server 1000 describes one exemplary set of applications to support the live encoding server 1000. Other embodiments of the invention may utilize multiple servers, with the functions discussed above distributed across multiple servers and/or locations as necessary to implement the invention. Furthermore, the applications discussed above could be combined into one or more applications and implemented as software modules as necessary to implement the invention. For instance, the applications discussed above could alternatively be implemented as modules of a single application residing on live encoding server 1000. Moreover, where a single application is shown, other embodiments may utilize multiple applications dedicated to similar functions.
The various processes discussed above can be implemented on singular, discrete servers. Alternatively, they can each be implemented as shared and/or discrete servers on any number of physical, virtual, or cloud computing devices. Specifically, live encoding systems in accordance with some embodiments of the invention could include separate encoding server(s) and HTTP server(s). Persons of ordinary skill in the art will recognize that various implementation methods may be used to implement the process servers of embodiments of the invention.
While the above description contains many specific embodiments of the invention, these should not be construed as limitations on the scope of the invention, but rather as an example of one embodiment thereof. Accordingly, the scope of the invention should be determined not by the embodiments illustrated, but by the appended claims and their equivalents.
The present application is a continuation of U.S. patent application Ser. No. 15/055,467 entitled “Systems and Methods for Frame Duplication and Frame Extension in Live Video Encoding and Streaming” filed Feb. 26, 2016, which application claims priority under 35 U.S.C. § 119(e) to U.S. Provisional Patent Application Ser. No. 62/126,393 entitled “Systems and Methods for Frame Duplication and Frame Extension in Live Video Encoding and Streaming” filed Feb. 27, 2015, the disclosures of which are hereby incorporated by reference herein in their entirety.