The present invention relates in general to encoding and decoding video data.
An increasing number of applications today make use of digital video for various purposes including, for example, remote business meetings via video conferencing, high definition video entertainment, video advertisements, and sharing of user-generated videos. As technology is evolving, users have higher expectations for video quality and expect high resolution video even when transmitted over communications channels having limited bandwidth.
To permit higher quality transmission of video while limiting bandwidth consumption, a number of video compression schemes have been developed, including formats such as VPx, promulgated by Google Inc. of Mountain View, Calif., and H.264, a standard promulgated jointly by the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG), including present and future versions thereof. H.264 is also known as MPEG-4 Part 10 or MPEG-4 AVC (formally, ISO/IEC 14496-10).
In multipoint video transmission in applications such as video-on-demand and video conferencing, several endpoints need to access the same video stream simultaneously. There is often a wide range of available bandwidth capacities between the endpoint transmitting and the different endpoints receiving the video stream. In some systems, simulcasting has been utilized to transmit the same video stream to two or more endpoints having different bandwidth capacities. In simulcasting, for example, the same video stream is encoded at different qualities and each endpoint receives one of the different quality video streams depending on the endpoint's respective bandwidth capacity. Since these encoded video streams are generated from the same video stream, the video stream may not be encoded and transmitted in a manner that efficiently exploits common data shared in the resulting encoded video streams.
A method of transmitting a video bitstream to a first and at least a second endpoint with varying bandwidth capacities is disclosed herein. The method includes, according to one aspect, identifying bandwidth capacities of the first and second endpoints, the second endpoint having less bandwidth capacity than the first endpoint, and encoding at least a portion of the video bitstream to generate at least one version of a first data partition and a plurality of versions of at least a second data partition. The plurality of versions of the second data partition include at least one high quality version and at least one low quality version of the second data partition. The method also includes transmitting the at least one version of the first partition and the at least one high quality version of the second partition to the first endpoint, and transmitting the at least one low quality version of the second partition to the second endpoint.
Also disclosed herein is an apparatus for transmitting a video bitstream to a first and at least a second endpoint with varying bandwidth capacities. The apparatus includes, according to one aspect, a memory and at least one processor configured to execute instructions stored in the memory to identify bandwidth capacities of the first and second endpoints. The second endpoint has less bandwidth capacity than the first endpoint. The at least one processor is also configured to execute instructions stored in the memory to encode at least a portion of the video bitstream to generate at least one version of a first data partition and a plurality of versions of at least a second data partition. The plurality of versions of the second data partition include at least one high quality version and at least one low quality version of the second data partition. Further, the at least one processor is configured to transmit the at least one version of the first partition and the at least one high quality version of the second partition to the first endpoint and transmit the at least one low quality version of the second partition to the second endpoint.
Further disclosed herein is a method of transmitting a video bitstream that has been partitioned into a first data partition and at least a second data partition, the video bitstream to be sent to a first and at least a second endpoint with varying bandwidth capacities. The method includes, according to one aspect, identifying the first and second endpoints. The second endpoint has less bandwidth capacity than the first endpoint. The method also includes receiving at least one encoded version of the first data partition and a plurality of encoded versions of the at least a second data partition. The plurality of encoded versions of the second data partition include at least one encoded high quality version and at least one encoded low quality version of the second data partition. Further, the method includes transmitting the at least one encoded version of the first partition and the at least one encoded high quality version of the second partition to the first endpoint, and transmitting the at least one encoded low quality version of the second partition to the second endpoint.
Also disclosed herein is a method of receiving a video bitstream in a system having a first, a second and at least a third endpoint with varying bandwidth capacities, in which a video bitstream that has been partitioned into a first data partition and at least a second data partition is transmitted from the third endpoint, the second endpoint having less bandwidth capacity than the first endpoint. The method includes, according to one aspect, obtaining, at the second endpoint, a low quality version of the second partition having less data than a high quality version of the second partition transmitted to the first endpoint, and decoding the low quality version of the second partition obtained at the second endpoint.
A system for transmitting a video bitstream to endpoints with varying bandwidth capacities is also disclosed herein. The system includes, according to one aspect, a first endpoint, a second endpoint and a third endpoint. The second endpoint has less bandwidth capacity than the first endpoint. The third endpoint is configured to encode at least a portion of the video bitstream to generate at least one version of a first data partition and a plurality of versions of at least a second data partition. The plurality of versions of the second data partition include at least one high quality version and at least one low quality version of the second data partition. The third endpoint is also configured to transmit the at least one version of the first partition and the at least one high quality version of the second partition to the first endpoint and to transmit the at least one low quality version of the second partition to the second endpoint.
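By way of illustration only, the following sketch shows one way the transmission flow recited above could be organized in software. Every name in it (the encoder and endpoint objects, their methods and attributes) is a hypothetical placeholder, not part of the disclosure.

```python
# Illustrative sketch only: one possible realization of the disclosed
# transmission flow. All names here are hypothetical placeholders.

def transmit_partitioned_stream(bitstream, encoder, endpoint_1, endpoint_2):
    # Identify bandwidth capacities of the receiving endpoints; the
    # second endpoint has less bandwidth capacity than the first.
    assert endpoint_2.bandwidth_kbps < endpoint_1.bandwidth_kbps

    # Encode one shared version of the first data partition and two
    # versions (high and low quality) of the second data partition.
    partition_a = encoder.encode_first_partition(bitstream)
    partition_b1 = encoder.encode_second_partition(bitstream, quality="high")
    partition_b2 = encoder.encode_second_partition(bitstream, quality="low")

    # The first endpoint receives the shared partition plus the high
    # quality second partition; the second endpoint receives the low
    # quality second partition.
    endpoint_1.send(partition_a, partition_b1)
    endpoint_2.send(partition_a, partition_b2)
```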
These and other embodiments will be described in additional detail hereafter.
The various features, advantages and other uses of the present apparatus will become more apparent by referring to the following detailed description and drawings in which:
A network 20 connects endpoint 12 and server 18 to permit transmission of the encoded video stream from endpoint 12 to server 18. A network 22 connects server 18 to endpoint 14 and a network 24 connects server 18 to endpoint 16 to permit distribution of the encoded video stream to endpoint 14 and endpoint 16, respectively. Networks 20, 22 and 24 may be the Internet. Networks 20, 22 and 24 can also be a local area network (LAN), wide area network (WAN), virtual private network (VPN), cellular network, or any other network to permit the transfer of the video stream. Each of networks 20, 22 and 24 can be the same or can be different from one another.
Endpoints 12, 14 and 16 can each be, for example, a computer having an internal configuration of hardware including a processor such as a central processing unit (CPU) 12a, 14a and 16a, respectively, and a memory 12b, 14b and 16b, respectively. CPUs 12a, 14a and 16a can be controllers for controlling the operations of endpoints 12, 14 and 16, respectively. CPU 12a is connected to memory 12b by, for example, a memory bus. Similarly, CPU 14a can be connected to memory 14b and CPU 16a can be connected to memory 16b using memory buses. Memories 12b, 14b and 16b can be random access memory (RAM) or any other suitable memory device. Memories 12b, 14b and 16b can store data and program instructions which are used by CPUs 12a, 14a and 16a, respectively. Other suitable implementations of endpoints 12, 14 and 16 are possible.
Server 18 can be, for example, a computer having an internal configuration of hardware including a processor such as a central processing unit (CPU) 18a and a memory 18b. CPU 18a can be a controller for controlling the operations of server 18. CPU 18a is connected to memory 18b by, for example, a memory bus. Memory 18b can be random access memory (RAM) or any other suitable memory device. Memory 18b can store data and program instructions which are used by the CPU 18a. Other suitable implementations of server 18 are possible.
In some embodiments, system 10 may not include server 18. In these embodiments, the encoded video stream can be transmitted directly from endpoint 12 to endpoint 14 via a network (similar to networks 20, 22 or 24). The encoded video stream can also be transmitted directly from endpoint 12 to endpoint 16 via another network (similar to networks 20, 22 or 24). In other embodiments, system 10 can include more than one server 18. For example, endpoint 12 and endpoint 14 can be connected using one server and endpoint 12 and endpoint 16 can be connected using another server. Further, for example, each endpoint 12, 14 and 16 can be connected to its own individual server and the individual servers can relay the video stream between each other. Thus, for example, a sending endpoint (e.g., endpoint 12) can send the encoded video stream to the server corresponding to the sending endpoint, which in turn can relay the video stream to the server corresponding to a receiving endpoint (e.g., endpoint 14). The server corresponding to the receiving endpoint can in turn send the encoded video stream to the receiving endpoint.
Although system 10, as illustrated, only includes three endpoints, any number of endpoints can be included in system 10. Further, other components can be included in system 10. Each endpoint can be connected to a display (not shown) to display a video stream before encoding and/or a video stream decoded by the decoder in the endpoint. The display can be implemented in various ways, including by a liquid crystal display (LCD) or a cathode-ray tube (CRT).
Other implementations of system 10 are possible. For example, a video stream can be encoded and then stored for transmission at a later time by endpoint 14 or endpoint 16 or any other device having a processor and memory. In another implementation, additional components can be added to the system 10. For example, in a videoconferencing application, a display and a video camera can be attached to endpoint 12 to capture the video stream to be encoded.
As discussed previously, some current systems utilize simulcasting in multipoint video transmission to transmit the same video simultaneously to two or more endpoints having different bandwidth capacities. For example, in current systems having a configuration similar to that illustrated in
As such, endpoint 12 encodes two copies of a video stream (e.g., video stream 50), with each copy encoded at a different quality. The copy encoded at 3000 kbps will be of higher quality and will be sent to endpoint 14, and the copy encoded at 1000 kbps will be of lower quality and will be sent to endpoint 16. Because the two copies are created from the same video stream, there is common data in both the higher quality and lower quality streams. However, this common data is not considered during the simulcast encoding process, and the same data may be unnecessarily sent in both the higher quality stream and the lower quality stream. Common data can be information that is identical, but it also includes information that is merely similar. Sending this duplicated common data increases the upstream bandwidth consumption from endpoint 12.
The embodiments disclosed herein avoid sending all or part of this common data twice, which reduces the upstream bandwidth consumption from endpoint 12. To do this, according to one embodiment, video stream 50 can be partitioned into two or more partitions. During encoding, a single version of the first partition can be encoded and two versions of the second partition can be encoded. Each version of each partition can be encoded and decoded separately. If there are additional endpoints with bandwidth capacities different from those of endpoints 14 and 16, additional versions of the second partition can also be encoded as desired or required. The high quality stream can be composed of the single version of the first partition and a first, high quality version of the second partition. The low quality stream can be composed of the same single version of the first partition and a second, low quality version of the second partition. Only one copy of the single version is, however, transmitted from endpoint 12. Because the single version of the first partition can contain the data that is common to both the higher quality stream and the lower quality stream, it is unnecessary to transmit two copies of the common data.
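By way of a numeric illustration, the sketch below compares the upstream bandwidth consumed by simulcasting with that consumed by the shared-partition approach, using the 3000 kbps and 1000 kbps figures from the example above; the 800 kbps size assumed for the common first partition is purely hypothetical.

```python
# Hedged numeric illustration of the upstream savings. The 3000 and
# 1000 kbps figures come from the simulcast example above; the 800 kbps
# size assumed for the common first partition is a hypothetical value.
high_kbps, low_kbps = 3000, 1000
simulcast_upstream = high_kbps + low_kbps      # 4000 kbps: two full copies

common_kbps = 800                              # assumed size of partition A
shared_upstream = (common_kbps                 # partition A, sent once
                   + (high_kbps - common_kbps)   # partition B1
                   + (low_kbps - common_kbps))   # partition B2
                                               # = 800 + 2200 + 200 = 3200 kbps

print(simulcast_upstream - shared_upstream)    # 800 kbps saved upstream
```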
More specifically, video stream 50 can be divided into two or more partitions, with one of the partitions (e.g., partition A) containing the prediction mode parameters and motion vectors (hereinafter “prediction information”) for all macroblocks in each frame of the video stream. The remaining partitions (e.g., partition B1 or partition B2) contain the transform coefficients for the residuals (hereinafter “residual information”). These transform coefficients (e.g., discrete cosine transform coefficients) are added to the predicted block values during the decoding process. The partition containing the prediction information can be decoded without the remaining residual partitions. Higher quality stream 60 and/or lower quality stream 62 can also have more than one partition containing additional residual information. Because partition B1 is of higher quality than partition B2, partition B1 will generally contain more data than partition B2.
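The following sketch illustrates, with hypothetical class and field names, one way the division between prediction information and residual information described above could be represented; only the split itself follows the text.

```python
# Hedged structural sketch of the partitioning described above. The
# class and field names are hypothetical illustrations.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class MacroblockPrediction:
    mode: int                        # prediction mode parameter
    motion_vector: Tuple[int, int]   # (dx, dy)

@dataclass
class PredictionPartition:           # e.g., partition A: decodable on its own
    predictions: List[MacroblockPrediction]

@dataclass
class ResidualPartition:             # e.g., partition B1 or B2
    # Transform (e.g., DCT) coefficients per macroblock, added to the
    # predicted block values during decoding.
    coefficients: List[List[int]]

# A high quality stream pairs one PredictionPartition with a large
# ResidualPartition (B1); a low quality stream pairs the same
# PredictionPartition with a smaller ResidualPartition (B2).
```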
Video stream 50 can also be partitioned according to other techniques. For example, the residual information can be contained in the first partition rather than the second partition. The prediction information may also be packed into more than one partition. Further, video stream 50 can be partitioned based on factors other than or in addition to prediction information and residual information.
Once the bandwidth capacities have been determined, control moves to block 74 to encode video bitstream 50 to generate a single version of the first partition (e.g., partition A). Then, control moves to block 76 to encode video bitstream 50 to generate a first high quality version of a second partition (e.g., partition B1) and a second low quality version of a second partition (e.g., partition B2). Blocks 74 and 76 are illustrated separately in
Since partition B2 is being sent to endpoint 16, which has less bandwidth capacity, partition B2 will generally have less data (i.e., a reduced amount) than partition B1. Accordingly, the amount of data in partition B2 can be reduced during or subsequent to the encoding process. Exemplary methods of reducing the amount of data include increasing the quantization size for partition B2 during the encoding process or down-sampling partition B2. In some instances, the size of video bitstream 50 can be reduced by, for example, dropping selected, predetermined or random frames, which can in turn reduce the size of partition B2 once generated.
With respect to increasing the quantization size, for example, a first set of quantization levels can be used to generate the first version of the second partition and a second set of quantization levels can be used to generate the second version of the second partition during the encoding process (i.e., block 76). The number of quantization levels in the first set can be greater than the number of quantization levels in the second set. A set having a smaller number of quantization levels can provide greater data reduction than a set having a larger number of quantization levels. Accordingly, to illustrate with a simple example, if the first set has eight quantization levels [1, 6, 9, 10, 15, 21, 22, 24] and the second set has three quantization levels [1, 5, 22], the second set can provide greater data reduction. As such, the second version of the second partition that uses the second set of quantization levels can have a lower bit rate (i.e. lower quality stream) than the first version of the second partition, which uses the first set of quantization levels.
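The quantization example above can be made concrete with the following sketch. The nearest-level mapping and the fixed-length bit estimate are illustrative assumptions rather than the disclosed encoding process.

```python
# Hedged sketch of the quantization-level example above: each transform
# coefficient is mapped to the nearest level in a set, and a smaller set
# needs fewer bits per symbol, giving greater data reduction.
import math

def quantize(coefficients, levels):
    # Map each coefficient to the nearest quantization level.
    return [min(levels, key=lambda q: abs(q - c)) for c in coefficients]

first_set = [1, 6, 9, 10, 15, 21, 22, 24]  # eight levels (first version, B1)
second_set = [1, 5, 22]                    # three levels (second version, B2)

coeffs = [2, 7, 14, 23, 9, 21]             # hypothetical coefficients
b1 = quantize(coeffs, first_set)           # finer reconstruction
b2 = quantize(coeffs, second_set)          # coarser, but cheaper to code

# With fixed-length codes, each symbol costs ceil(log2(len(levels))) bits:
bits_per_symbol_b1 = math.ceil(math.log2(len(first_set)))   # 3 bits
bits_per_symbol_b2 = math.ceil(math.log2(len(second_set)))  # 2 bits
```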
At block 80, once partition A and partitions B1 and B2 have been encoded, they can be transmitted to intermediary point 18.
Intermediary point 18 can, for example, identify the endpoints based on information received from endpoint 12. This information can be contained in the encoded versions of video bitstream 50 or can be sent separately from the encoded versions of the video bitstream. Alternatively, intermediary point 18 can be preprogrammed to identify which endpoints receive which encoded versions. Other suitable techniques for identifying the endpoints are also possible. For example, intermediary point 18 can independently make the determination without information from endpoint 12, such as by requesting that endpoints 14 and 16 send their respective bandwidth capacities. Based on this information, intermediary point 18 can determine which encoded versions of video bitstream 50 should be sent to which endpoints.
Once intermediary point 18 has identified the endpoints such as endpoints 14 and 16, it can transmit the single version of the first partition (partition A) and the first version of the second partition (partition B1) to endpoint 14 at block 96. Similarly, at block 98, intermediary point 18 can transmit the single version of the first partition (partition A) and the second version of the second partition (partition B2) to endpoint 16. As discussed previously, partition A sent to endpoint 14 and partition A sent to endpoint 16 are the same. Since intermediary point 18 only receives a single version of partition A, intermediary point 18 can duplicate partition A so that it can send it to more than one endpoint. Once endpoints 14 and 16 receive their respective partitions, each can perform a decoding process on the data.
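By way of illustration, the sketch below shows one way the forwarding step at intermediary point 18 could be implemented; the function and method names are hypothetical.

```python
# Hedged sketch (hypothetical names) of forwarding at intermediary
# point 18: the single received copy of partition A is duplicated so
# that each endpoint gets it alongside the version of the second
# partition matching its identified bandwidth capacity.
import copy

def forward_partitions(partition_a, partition_b1, partition_b2,
                       endpoint_14, endpoint_16):
    # Duplicate partition A; only one copy arrived from endpoint 12.
    partition_a_copy = copy.deepcopy(partition_a)

    endpoint_14.send(partition_a, partition_b1)       # high quality stream
    endpoint_16.send(partition_a_copy, partition_b2)  # low quality stream
```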
In other embodiments, rather than sending a single encoded version of the first partition, two versions (e.g., a first version and a second version) of the first partition can be generated, with only one of the versions (for example, the first version) sent to intermediary point 18. At intermediary point 18, the first version of the first partition can be used to generate the second version of the first partition.
Rather than partition C1 and partition C2 containing the same data, as in the embodiments discussed previously, partition C2 can be created from partition C1. For example, partition C1 can be downsampled to create partition C2. More specifically, since the first partition can contain the motion vectors, the motion vectors can be downsampled. Since partition C2 is being sent to endpoint 16, which has less bandwidth capacity, partition C2 will generally have less data than partition C1.
Downsampling of partition C1 can occur during the encoding process of partition C1. More specifically, in one example, partition C1 can be downsampled after prediction information is calculated and before residual information is calculated. In other embodiments, partition C1 can be downsampled after both prediction information and residual information are calculated. Other techniques of generating the second version of the first partition are also possible.
When transmitting the higher quality stream, partition C1, partition D1 and partition D2 can be transmitted from endpoint 12 to intermediary point 18 without transmitting partition C2. Intermediary point 18 can then create partition C2 by decoding partition C1 and downsampling the motion vectors. In turn, intermediary point 18 can send partition C1 and partition D1 to endpoint 14 and can send partition C2 and partition D2 to endpoint 16. In other embodiments, partition C2, rather than partition C1, is transmitted to intermediary point 18. In turn, partition C2 can be upsampled to create partition C1.
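The following sketch shows one possible 2:1 downsampling of a motion vector field, such as might be used to derive partition C2 from partition C1; averaging each 2x2 block of vectors is an assumption, not taken from the disclosure.

```python
# Hedged sketch: deriving a half-resolution motion vector field by
# averaging each 2x2 block of vectors. The averaging choice is one
# possible downsampling technique among others.
def downsample_motion_vectors(mv_field):
    """mv_field: 2-D list of (dx, dy) motion vectors; returns a field
    with half the resolution in each dimension."""
    out = []
    for r in range(0, len(mv_field) - 1, 2):
        row = []
        for c in range(0, len(mv_field[r]) - 1, 2):
            block = [mv_field[r][c], mv_field[r][c + 1],
                     mv_field[r + 1][c], mv_field[r + 1][c + 1]]
            row.append((sum(v[0] for v in block) / 4.0,
                        sum(v[1] for v in block) / 4.0))
        out.append(row)
    return out
```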
Once partition C2 has been generated, control moves to block 128 to encode video bitstream 50 to generate a first high quality version of a second partition (e.g., partition D1) and a second low quality version of the second partition (e.g., partition D2). As discussed previously, the second version of the first partition can be used to encode the second version of the second partition. Blocks 124 and 126 are illustrated separately in
At block 130, once partition C1 and partitions D1 and D2 have been encoded, they can be transmitted to intermediary point 18 without transmitting partition C2. As discussed previously, transmitting partitions C1, D1 and D2 without transmitting partition C2 reduces the upstream bandwidth consumption from endpoint 12 to intermediary point 18.
At block 146 intermediary point 18 can identify the first and second endpoints that are to receive the encoded versions of video bitstream 50 similar to that described previously at block 94 of
Once intermediary point 18 has identified the endpoints such as endpoints 14 and 16, it can transmit the first version of the first partition (partition C1) and the first version of the second partition (partition D1) to endpoint 14 at block 148. Similarly, at block 150, intermediary point 18 can transmit the second version of the first partition (partition C2) and the second version of the second partition (partition D2) to endpoint 16. Once endpoints 14 and 16 receive their respective partitions, each can perform a decoding process on the data.
The operation of encoding can be performed in many different ways and can produce a variety of encoded data formats. The above-described embodiments of encoding or decoding may illustrate some exemplary encoding techniques. However, in general, encoding and decoding are understood to include any transformation or any other change of data whatsoever.
The embodiments of endpoints 12, 14 or 16 or intermediary point 18 (and the algorithms, methods, instructions, etc. stored thereon and/or executed thereby) are implemented in whole or in part by one or more processors, which can include computers, servers, or any other computing device or system capable of manipulating or processing information, now existing or hereafter developed, including optical processors, quantum processors and/or molecular processors. Suitable processors also include, for example, general purpose processors, special purpose processors, IP cores, ASICs, programmable logic arrays, programmable logic controllers, microcode, firmware, microcontrollers, microprocessors, digital signal processors, memory, or any combination of the foregoing. In the claims, the term “processor” should be understood as including any of the foregoing, either singly or in combination. The terms “signal” and “data” are used interchangeably.
Further, portions of endpoints 12, 14 or 16 or intermediary point 18 do not necessarily have to be implemented in the same manner. Endpoints 12, 14 or 16 or intermediary point 18 can be implemented in whole or in part by one or more computers, servers, processors or any other suitable computing devices or systems that can carry out any of the embodiments described herein. In one embodiment, for example, endpoints 12, 14 or 16 or intermediary point 18 can be implemented using a general purpose computer/processor with a computer program that, when executed, carries out any of the respective methods, algorithms and/or instructions described herein. In addition or alternatively, for example, a special purpose computer/processor can be utilized which can contain specialized hardware for carrying out any of the methods, algorithms and/or instructions described herein.
Further, all or a portion of embodiments of the present invention can take the form of a computer program product accessible from, for example, a computer-usable or computer-readable medium. A computer-usable or computer-readable medium can be any device that can, for example, contain, store, communicate, and/or transport the program for use by or in connection with any computing system or device. The medium can be, for example, an electronic, magnetic, optical, electromagnetic, or semiconductor device. Other suitable media are also available.
While the invention has been described in connection with what is presently considered to be the most practical and preferred embodiment, it is to be understood that the invention is not to be limited to the disclosed embodiments but, on the contrary, is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims, which scope is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures as is permitted under the law.
Other Publications

“Series H: Audiovisual and Multimedia Systems; Infrastructure of audiovisual services—Coding of moving video; Advanced video coding for generic audiovisual services.” H.264, Version 1. International Telecommunication Union, May 2003.
“Series H: Audiovisual and Multimedia Systems; Infrastructure of audiovisual services—Coding of moving video; Advanced video coding for generic audiovisual services.” H.264, Version 3. International Telecommunication Union, Mar. 2005.
“Overview; VP7 Data Format and Decoder.” Version 1.5. On2 Technologies, Inc., Mar. 28, 2005.
“Series H: Audiovisual and Multimedia Systems; Infrastructure of audiovisual services—Coding of moving video; Advanced video coding for generic audiovisual services.” H.264, Amendment 1: Support of additional colour spaces and removal of the High 4:4:4 Profile. International Telecommunication Union, Jun. 2006.
“VP6 Bitstream & Decoder Specification.” Version 1.02. On2 Technologies, Inc., Aug. 17, 2006.
“Series H: Audiovisual and Multimedia Systems; Infrastructure of audiovisual services—Coding of moving video.” H.264, Amendment 2: New profiles for professional applications. International Telecommunication Union, Apr. 2007.
Firestone, S., et al. “Lip Synchronization in Video Conferencing.” Voice and Video Conferencing Fundamentals. Cisco Systems, Inc., Mar. 2007.
“VP6 Bitstream & Decoder Specification.” Version 1.03. On2 Technologies, Inc., Oct. 29, 2007.
“Series H: Audiovisual and Multimedia Systems; Infrastructure of audiovisual services—Coding of moving video.” H.264, Advanced video coding for generic audiovisual services, Version 8. International Telecommunication Union, Nov. 1, 2007.
“Series H: Audiovisual and Multimedia Systems; Infrastructure of audiovisual services—Coding of moving video.” H.264, Advanced video coding for generic audiovisual services, Version 11. International Telecommunication Union, Mar. 2009.
“Series H: Audiovisual and Multimedia Systems; Infrastructure of audiovisual services—Coding of moving video.” H.264, Advanced video coding for generic audiovisual services, Version 12. International Telecommunication Union, Mar. 2010.
“Implementors' Guide; Series H: Audiovisual and Multimedia Systems; Coding of moving video: Implementors Guide for H.264: Advanced video coding for generic audiovisual services.” H.264, Version 12. International Telecommunication Union, Jul. 30, 2010.
“VP8 Data Format and Decoding Guide.” WebM Project, Google On2, Dec. 1, 2010.
Babonneau et al. “SSRC Multiplexing for Unicast and Multicast RTP Sessions.” Network Working Group, Internet-Draft, IETF Trust, 2010.
Bankoski et al. “VP8 Data Format and Decoding Guide; draft-bankoski-vp8-bitstream-02.” Network Working Group, Internet-Draft, May 18, 2011, 288 pp.
Bankoski et al. “Technical Overview of VP8, An Open Source Video Codec for the Web.” Jul. 11, 2011.
Cisco WebEx, “Share Ideas With Anyone, Anywhere—Online.” Cisco WebEx Meeting Center, Product Overview, 2011 (2 pp).
Cisco, “Cisco TelePresence Product Portfolio.” Brochure, 2011 (5 pp).
Schulzrinne, H., et al. “RTP: A Transport Protocol for Real-Time Applications.” RFC 3550, The Internet Society, Jul. 2003.