The present disclosure relates in general to video signal transmission and particularly to the encoding and decoding of such a signal.
An increasing number of applications today make use of digital video signals for various purposes including, for example, business meetings between people in remote locations via video conferencing, high definition video entertainment, video advertisements, and sharing of user-generated videos. As technology evolves, users have higher expectations for video quality and resolution, even when video signals are transmitted over communications channels having limited bandwidth.
To permit transmission of digital video streams while limiting bandwidth consumption, a number of video compression schemes have been devised, including formats such as VPx, promulgated by Google Inc. of Mountain View, Calif., and H.264, a standard promulgated by ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG), including present and future versions thereof. H.264 is also known as MPEG-4 Part 10 or MPEG-4 AVC (formally, ISO/IEC 14496-10).
These compression schemes can use quantization techniques on frames of a digital video stream to reduce the bitrate (i.e. data size) of the encoded digital video stream. These quantization techniques discard part of a frame's data using standard computations, thereby reducing the frame's bitrate. Although these quantization techniques reduce the bitrate, they may not suitably maintain the quality of the video signal.
Disclosed herein are embodiments of methods and apparatuses for encoding a video signal.
One aspect of the disclosed embodiments is a method for encoding a video signal having a plurality of frames, each frame having a plurality of blocks. The encoding method includes identifying a first frame from the plurality of frames as an I-frame, the first frame having an original resolution, determining a variance for the first frame using a processor, and if the variance exceeds an intra threshold: selecting a frame resolution for the first frame that is less than the original resolution, and encoding the first frame using the selected frame resolution.
Another aspect of the disclosed embodiments is a method for determining at least one threshold used for encoding a video signal having a plurality of frames, each frame having a plurality of blocks. The method includes identifying a test sequence of frames, the frames in the test sequence of frames having an original resolution. The method further includes calculating at least one variance for at least one frame in the test sequence and calculating at least one first peak signal-to-noise ratio (PSNR) for the at least one frame using the original resolution. The method further includes determining the at least one threshold using the variances and first PSNRs.
Another aspect of the disclosed embodiments is an apparatus for encoding a video signal having at least one frame, each frame having a plurality of blocks, each block having a plurality of pixels. The apparatus comprises a memory and at least one processor configured to execute instructions stored in the memory to: identify a first frame from the plurality of frames as an I-frame, the first frame having an original resolution, determine a variance for the first frame, and if the variance exceeds an intra threshold: select a frame resolution for the first frame that is less than the original resolution, and encode the first frame using the selected frame resolution.
Another method for encoding a video signal having a plurality of frames described herein includes calculating a variance for each test frame in a sequence of test frames, the test frames in the sequence of test frames having an original resolution, calculating a first peak signal-to-noise ratio (PSNR) for each test frame using the original resolution, determining a threshold using the variances and first PSNRs, and providing the threshold to an encoder to select a frame resolution for a first frame of the plurality of frames, the frame resolution being one of the original resolution or a resolution different from the original resolution.
Another apparatus for encoding a video signal having at least one frame described herein includes a memory and a processor. The processor is configured to execute instructions stored in the memory to calculate a variance for each test frame in a sequence of test frames, the test frames in the sequence of test frames having an original resolution, calculate a first peak signal-to-noise ratio (PSNR) for each test frame using the original resolution, determine a threshold using the variances and first PSNRs, and provide the threshold to an encoder to select a frame resolution for a first frame of the plurality of frames, the frame resolution being one of the original resolution or a resolution different from the original resolution.
These and other embodiments will be described in additional detail hereafter.
The description herein makes reference to the accompanying drawings wherein like reference numerals refer to like parts throughout the several views, and wherein:
A display 18 configured to display a video stream can be connected to transmitting station 12. Display 18 may be implemented in various ways, including by a liquid crystal display (LCD) or a cathode-ray tube (CRT). Display 18 may also be configured for other uses, such as screencasting. Alternatively, or in addition to display 18, a video stream can be generated from a video camera 20 or received from a video file 22 and can be transferred to transmitting station 12.
A video stream can consist of a number of adjacent video frames (i.e. images), which may be still or dynamic. The video stream can be subdivided into individual frames. At the next level, each frame can be divided into a series of blocks, which contain data corresponding to, for example, a 16×16 block of displayed pixels. Each block can contain luminance and chrominance data for the corresponding pixels. The blocks can also be of any other suitable size, such as 16×8 pixel groups or 8×16 pixel groups. In other embodiments, the video stream may include only a single frame, as may occur in applications such as screencasting.
A network 28 connects transmitting station 12 and a receiving station 30 for encoding and decoding of the video stream. Specifically, the video stream can be encoded by an encoder in transmitting station 12, and the encoded video stream can be decoded by a decoder in receiving station 30. Network 28 may, for example, be the Internet. Network 28 may also be a local area network (LAN), wide area network (WAN), virtual private network (VPN), or any other means of transferring the video stream from transmitting station 12 to receiving station 30.
Receiving station 30, in one example, may be a computer having an internal configuration of hardware including a processor such as a central processing unit (CPU) 32 and a memory 34. CPU 32 is a controller for controlling the operations of receiving station 30. CPU 32 can be connected to memory 34 by, for example, a memory bus. Memory 34 may be RAM or any other suitable memory device. Memory 34 stores data and program instructions that are used by CPU 32. Other suitable implementations of receiving station 30 are possible.
The method 50 describes a process for determining whether to encode a GOP at the lower pixel resolution based on frame variance. Encoding a GOP at the lower pixel resolution reduces the bitrate of the encoded GOP and, when the variance of the frames is high, can provide a higher quality encoding at a target bitrate (even if quantization is also employed) than can be achieved through the sole use of quantization.
For a current frame in a digital video stream, the encoder first checks if conditions are met to set a pre-determined resolution for the next I-frame in the digital video stream (52). The conditions function as a “watchdog” to revert encoding of the digital video stream from a lower pixel resolution to the original resolution of the frames or to force encoding at a lower pixel resolution when warranted. The conditions can be based on the encoding of a set of P-frames (coming before the current frame) in the current frame's GOP or the previous frame's GOP (for example, where the current frame is an I-frame in a fixed-length GOP encoding).
The following are examples of when the pre-determined resolution is set. In one implementation, the P-frames in the set of P-frames are encoded, at least partially, with respect to a previously encoded frame. As described later, a variance set can be calculated for each P-frame, including an intra-prediction variance and an inter-prediction variance. For example, if the variances in the set of P-frames are mostly small values, the conditions are met to set the pre-determined resolution for the next I-frame to the original resolution of the frames in the digital video stream. Alternatively, if the variances in the set of P-frames are mostly large values, the conditions are met to set the pre-determined resolution for the next I-frame to the lower pixel resolution of the frames in the digital video stream. An encoder can implement both, one, or none of these conditions, depending on the implementation.
In another implementation, the encoder can determine, for example, whether the variances are mostly small values or mostly large values by considering one or both of the variances, and by comparing an aggregate (e.g. the sum, average, or any other aggregate) of those variances to a lower watchdog threshold value and an upper watchdog threshold value. For example, if the aggregate of variances is less than the lower watchdog threshold value, the pre-determined resolution is set to the original resolution. In another example, if the aggregate of variances is greater than the upper watchdog threshold value, the pre-determined resolution is set to the lower resolution. In another example, if neither condition is met, the pre-determined resolution is left as-is.
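By way of illustration, a minimal sketch of this aggregate comparison follows; the function signature and the use of the average as the aggregate are assumptions made here for concreteness, not part of the disclosure.

```python
# Hypothetical sketch of the watchdog test using an aggregate of the
# P-frame variances; the average is used as the aggregate here, but the
# sum or any other aggregate could be substituted.

def watchdog_resolution(p_frame_variances, lower_watchdog, upper_watchdog,
                        current_setting, original_resolution, lower_resolution):
    """Return the pre-determined resolution for the next I-frame."""
    aggregate = sum(p_frame_variances) / len(p_frame_variances)
    if aggregate < lower_watchdog:
        return original_resolution  # variances mostly small: revert
    if aggregate > upper_watchdog:
        return lower_resolution     # variances mostly large: force lower
    return current_setting          # neither condition met: leave as-is
```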
Alternatively, other methods of evaluating the set of P-frames to determine the pre-determined resolution may be used. In another implementation, each P-frame in the set can be determined to be complex or non-complex by comparing one variance of each frame to an intermediate threshold value. The intermediate threshold value can be, for example, the inter threshold, the intra threshold, or any other determined threshold. A complexity percentage is then calculated using the number of complex frames and the total number of frames in the set of P-frames. If the percentage is less than a lower watchdog threshold, the pre-determined resolution is set to the original resolution. If the percentage is greater than an upper watchdog threshold value, the pre-determined resolution is set to the lower resolution. If neither condition is met, the pre-determined resolution is left as-is.
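Under the same caveat, the complexity-percentage alternative might be sketched as follows; the percentage scale of the watchdog thresholds is an assumption for illustration.

```python
# Hypothetical sketch of the complexity-percentage watchdog described above.

def watchdog_by_complexity(p_frame_variances, intermediate_threshold,
                           lower_watchdog, upper_watchdog,
                           current_setting, original_resolution, lower_resolution):
    """Classify each P-frame as complex or non-complex, then act on the
    percentage of complex frames in the set."""
    complex_count = sum(1 for v in p_frame_variances if v > intermediate_threshold)
    complexity_pct = 100.0 * complex_count / len(p_frame_variances)
    if complexity_pct < lower_watchdog:
        return original_resolution
    if complexity_pct > upper_watchdog:
        return lower_resolution
    return current_setting
```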
Next, the encoder determines whether the current frame is an I-frame (54). The current frame may be an I-frame based on, for example, its position in the digital video stream. The current frame can also be an I-frame based on a number of alternative considerations that may or may not be used by the encoder. For example, the current frame could be an I-frame if requested by receiving station 30, or if the pre-determined resolution is set. However, some encoding schemes may require that a GOP be of a fixed length. In these encoding schemes, the I-frame determination would be based on the number of frames since the last I-frame.
If the current frame is an I-frame, the encoder checks whether there is a pre-determined resolution set (56). If there is not a pre-determined resolution set, the encoder determines the variance of the current frame (58). A method of determining an I-frame variance is described in more detail below.
Once the frame resolution of the current frame is selected or if there is a pre-determined resolution (from stage 56), the I-frame is encoded using the selected frame resolution or the pre-determined resolution (62). The encoding process can be performed using any encoding scheme, such as various standard video encoding schemes presently available. The encoding process may be performed in parallel to the remainder of method 50. For example, the encoding of stage 62 may be performed on one processor in a computer, whereas the other stages of method 50 may be performed on another processor. Such a scheme would allow for the encoding of a first frame while method 50 determines the resolution of a next frame. Once the current frame is encoded, the method 50 ends.
Returning to stage 54, if the current frame is not an I-frame, the encoder determines whether the current frame's GOP is being encoded at a resolution less than the original resolution (64). If the GOP is at the original resolution, the encoder determines a variance set for the current frame (66). A method of determining a P-frame variance is described in more detail below.
Once the resolution of the next I-frame is selected or if the current frame's GOP is being encoded at a lower resolution (from stage 64), the encoder encodes the current frame using the GOP's selected frame resolution (70). As described previously with respect to stage 62, the encoding of the current frame in stage 70 may be performed on one processor in a computer, whereas the other stages of method 50 may be performed on another processor. Such a scheme would allow for the encoding of a first frame while method 50 determines the resolution of a next frame. Once the current frame is encoded, the method 50 ends.
Referring back to stage 68, instead of encoding the current frame (stage 70), the encoder can alternatively redefine the current frame as an I-frame and return to stage 54 (as shown by the dotted line). To do so, the encoder must use an encoding scheme that allows for GOPs having varying numbers of frames in the encoded digital video stream.
The intra-prediction variance can be calculated by performing intra prediction on the blocks in the current frame. Intra prediction can be based on previously coded image samples within the current frame. Intra prediction can be performed on a current block by, for example, copying pixels (or filtered pixels) from adjacent, previously coded blocks to form a predicted block. The manner in which the pixels are copied can be by vertical prediction, horizontal prediction, DC prediction, True Motion prediction, southwest prediction, southeast prediction, vertical right diagonal prediction, vertical left diagonal prediction, horizontal down prediction, horizontal up prediction, etc.
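As a rough illustration of the pixel-copying idea only (actual codec mode definitions involve edge filtering and availability rules not shown here), three of the simpler modes might be sketched as:

```python
import numpy as np

# Illustrative sketches of vertical, horizontal, and DC intra prediction
# of a square block from its previously coded neighboring pixels.

def predict_vertical(above_row):
    # Copy the row of pixels above the block straight down.
    return np.tile(above_row, (above_row.size, 1))

def predict_horizontal(left_col):
    # Copy the column of pixels left of the block straight across.
    return np.tile(left_col.reshape(-1, 1), (1, left_col.size))

def predict_dc(above_row, left_col):
    # Fill the block with the average of the neighboring pixels.
    dc = (above_row.sum() + left_col.sum()) / (above_row.size + left_col.size)
    return np.full((left_col.size, above_row.size), dc)
```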
Intra prediction can also be performed using a technique other than copying pixel values. For example, a predicted block can be formed for a current block using one or more parameterized equations. These parameterized equations can be, for example, an expression representing a curve that has a “best fit” to a defined set of previously coded pixels in the frame. Other techniques of determining a predicted block using intra prediction are also possible.
A residual block is determined based on the difference between the current block and the predicted block. The intra-prediction variance can then be calculated using the following equations:
$$\mu = \frac{1}{N}\sum_{i,j} p_{i,j} \tag{1}$$

wherein:

i is an x-coordinate within the residual block;

j is a y-coordinate within the residual block;

p_{i,j} is the value of the pixel located at coordinates (i, j) within the residual block; and

N is the number of pixels within the residual block. In addition,

$$V = \frac{1}{N}\sum_{i,j}\left|p_{i,j} - \mu\right| \tag{2}$$

wherein μ is the mean of the residual block given by equation (1), and V is the intra-prediction variance.
The mean for the residual block is first calculated by averaging the values of all pixels within the residual block. The intra-prediction variance is then calculated by averaging the absolute value of the difference of each pixel from the mean of the residual block. The calculations above are exemplary only, and other similar means of determining the intra-prediction variance may be utilized.
The encoder next adds the calculated intra-prediction variance for the selected block to the intra-prediction variance total for the current frame (90). The encoder then returns to determine whether additional blocks are available within the current frame (stage 84). Once there are no blocks left to process, the encoder normalizes the intra-prediction variance total (92). Normalization is used to equalize the scale of the intra-prediction variance total with the intra threshold with which it will later be compared. For example, the intra threshold may be of a per-block scale. In such a case, the intra-prediction variance total would be normalized by dividing it by the number of blocks in the current frame, and that result would be used as the frame's variance. In another example, the intra threshold may be of a per-frame scale. In such a case, the intra-prediction variance total would be left as-is and used directly as the frame's variance.
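For concreteness, a transcription of equations (1) and (2), together with the per-block normalization of stage 92, might look like the following sketch; the function names and the per-block scale are chosen here for illustration.

```python
import numpy as np

def residual_variance(residual_block):
    """Equations (1) and (2): mean of the residual block, then the mean
    absolute deviation of its pixels from that mean."""
    pixels = residual_block.astype(np.float64).ravel()
    mean = pixels.sum() / pixels.size                  # equation (1)
    return np.abs(pixels - mean).sum() / pixels.size   # equation (2)

def frame_intra_variance(residual_blocks):
    """Stage 90: accumulate the per-block variances; stage 92: normalize
    to a per-block scale by dividing by the number of blocks."""
    total = sum(residual_variance(b) for b in residual_blocks)
    return total / len(residual_blocks)
```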
As described before, the intra-prediction variance can be calculated by copying pixel values from previously coded blocks, by using a parameterized equation, or by any other possible technique. A residual block is determined based on the difference between the current block and the predicted block. The intra-prediction variance can then be calculated using equations (1) and (2) above or by an equivalent set of calculations.
The inter-prediction variance can be calculated by first performing an inter-frame motion vector search for a best-matching block in a reference frame. The reference frame can be any reference frame available in the encoding scheme used, including a last frame, a last I-frame, or an alternative reference frame. A residual block is determined based on the difference between the current block and the best-matching block. A motion vector is also encoded that describes the position of the best-matching block relative to the position of the current block. The inter-prediction variance can then be calculated using equations (1) and (2) above or by an equivalent set of calculations.
In one embodiment, the inter-prediction variance can be replaced by the intra-prediction variance. The inter-prediction variance is replaced when the intra-prediction variance is the smaller of the two. The replacement can be done because an inter-predicted frame may contain blocks that are both inter predicted and intra predicted. The encoder may take this into account by using the intra-prediction variance when intra prediction finds a better matching block than can be found using inter prediction.
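A sketch of this per-block selection follows, reusing the hypothetical residual_variance helper from the earlier sketch; the block inputs and integer casts are illustrative assumptions.

```python
import numpy as np

def block_variance_set(current_block, intra_predicted_block, best_match_block):
    """Per-block intra- and inter-prediction variances, with the intra value
    replacing the inter value when intra prediction gave the smaller variance."""
    cur = current_block.astype(np.int32)
    intra_var = residual_variance(cur - intra_predicted_block.astype(np.int32))
    inter_var = residual_variance(cur - best_match_block.astype(np.int32))
    return intra_var, min(inter_var, intra_var)
```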
The encoder next adds the calculated intra-prediction variance for the selected block to the intra-prediction variance total for the current frame and adds the calculated inter-prediction variance for the selected block to the inter-prediction variance total for the current frame (130). The encoder then returns to determine whether additional blocks are available within the current frame (stage 124). Once there are no blocks left to process, the encoder then normalizes the intra-prediction variance total and the inter-prediction variance total (122). The normalization process is the same as that described previously with respect to stage 92. The normalized intra-prediction variance total is the intra variance and the normalized inter-prediction variance total is the inter variance. The intra variance and the inter variance together form the variance set of the current frame.
If the intra variance is not greater than the intra threshold, the next I-frame resolution is set to the original resolution (146). Otherwise, the next I-frame resolution is set to a resolution less than the original resolution (148). Referring back to stage 142, if the inter variance is not greater than the inter threshold, the next I-frame resolution is left as-is (150).
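Taken together, stages 142 through 150 reduce to a small decision function; the sketch below assumes the illustrative names used in the earlier sketches.

```python
def next_iframe_resolution(intra_var, inter_var, intra_threshold, inter_threshold,
                           current_setting, original_resolution, lower_resolution):
    """Stages 142-150: decide the resolution of the next I-frame."""
    if inter_var <= inter_threshold:
        return current_setting        # stage 150: leave as-is
    if intra_var <= intra_threshold:
        return original_resolution    # stage 146
    return lower_resolution           # stage 148
```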
The method 160 operates on a test sequence of frames. The test sequence of frames can contain video data similar to that expected to be encoded. For example, in a screencasting encoding application, the test sequence of frames could be an exemplary screencasting video data stream. In another example, if an encoder could be used both for screencasting and for encoding of moving pictures (i.e. video clips and/or movies), the test sequence of frames can include both a screencasting video data stream and a moving picture video data stream. The test sequence of frames can also be based on video data from other sources.
Once variables are initialized, the method 160 next checks to see if any frames are left to process in the test sequence of frames (164). If there is at least one frame left, the next frame for processing is selected (166). The variance of the selected frame is calculated using a method such as method 80 of determining an intra frame variance (168). The selected frame is encoded and then decoded using its original resolution (170). The encoding is performed to create an encoded frame that is within a target bitrate.
An original resolution peak signal-to-noise ratio (PSNR_O) is calculated using the selected frame and the decoded frame (172). A PSNR value is a measure of quality comparing the original frame and a lossy-encoded reconstructed (decoded) frame. In this case, the PSNR_O measures the quality of the resulting decoded frame after being compressed to the target bitrate using techniques other than the changing of pixel resolution (i.e. quantization).
The PSNR can be calculated using a mean squared error (MSE); the PSNR can alternatively be calculated using other means. One exemplary pair of equations for calculating the MSE and PSNR is:

$$\mathrm{MSE} = \frac{1}{mn}\sum_{i=0}^{m-1}\sum_{j=0}^{n-1}\left[S(i,j) - D(i,j)\right]^2 \tag{3}$$

wherein:

i is an x-coordinate;

j is a y-coordinate;

S is the selected frame;

D is the decoded frame;

m is the width of the frames S and D; and

n is the height of the frames S and D. In addition,

$$\mathrm{PSNR} = 10\log_{10}\!\left(\frac{\mathrm{MAX}_S^2}{\mathrm{MSE}}\right) \tag{4}$$

wherein MAX_S is the maximum possible pixel value of the selected frame.
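A direct transcription of equations (3) and (4) might look as follows; the 8-bit default maximum pixel value is an assumption.

```python
import numpy as np

def psnr(selected, decoded, max_value=255.0):
    """Equations (3) and (4): MSE between two frames, then PSNR in decibels."""
    s = selected.astype(np.float64)
    d = decoded.astype(np.float64)
    mse = np.mean((s - d) ** 2)                       # equation (3)
    if mse == 0:
        return float('inf')                           # identical frames
    return 10.0 * np.log10(max_value ** 2 / mse)      # equation (4)
```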
Once the PSNR_O has been calculated, the selected frame is downsampled to a resolution less than the original resolution, the downsampled frame is encoded and then decoded, and the decoded downsampled frame is then upsampled to the original resolution (174). As with the encoding of the original resolution frame, the encoding of the downsampled frame is performed using a target bitrate. The purpose is to create a decoded upsampled frame for comparison with the selected frame. The resolution of the downsampled frame can be determined using one or more pre-determined lower resolutions. Alternatively, the resolution of the downsampled frame can be determined on a frame-by-frame basis, selected by a user, or determined by any other technique.
A lower resolution peak signal-to-noise-ratio (PSNR_L) is then calculated (176). In this case, the PSNR_L measures the quality of the resulting decoded upsampled frame after being compressed to the target bitrate using the technique of changing the pixel resolution.
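Assuming hypothetical encode_decode, downsample, and upsample helpers (the disclosure does not tie stages 170 through 176 to any particular codec or scaler), and reusing the psnr sketch above, the per-frame measurements might be sketched as:

```python
def measure_frame(frame, target_bitrate, low_resolution):
    """Stages 170-176 for one test frame: PSNR_O at the original resolution
    and PSNR_L via the downsample/encode/decode/upsample path.
    encode_decode, downsample, and upsample are hypothetical helpers."""
    decoded = encode_decode(frame, target_bitrate)                   # stage 170
    psnr_o = psnr(frame, decoded)                                    # stage 172
    small = downsample(frame, low_resolution)                        # stage 174
    restored = upsample(encode_decode(small, target_bitrate), frame.shape)
    psnr_l = psnr(frame, restored)                                   # stage 176
    return psnr_o, psnr_l
```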
Once the intra variance, PSNR_O, and PSNR_L have been calculated for the selected frame, the method 160 returns to stage 164 to determine if any additional frames are available in the test sequence of frames. Once there are no frames left, the method 160 includes plotting the variance, PSNR_O, and PSNR_L values calculated for each frame (178). The plot includes two series of data. The first series includes the variance for each frame versus the PSNR_O value for each frame. The second series includes the variance for each frame versus the PSNR_L value for each frame.
The first and second series can be plotted using fitted curve techniques. For example, an approximate fitted curve function can be determined to approximate each series. The fitted curve techniques can include, for example, the least squares method. Alternatively, the first and second series can be plotted using their actual values. Plotting may not involve the actual placement of data points on a coordinate plane; rather, plotting may merely be an intermediate step performed by a processor.
Next, an intersection between the first series and the second series is determined (180). The intersection may be determined computationally by a processor based on the fitted curves determined for each series. But the intersection can also be determined using other methods. For example, a programmer or other person may select the intersection based on a plot of each series on a coordinate plane. The selected intersection is the intra threshold (182). Alternatively, the selected intersection's value may be multiplied by a constant or processed by a standard function to normalize it for use in the encoder as the intra threshold.
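One way to realize stages 178 through 182 numerically is sketched below; the polynomial fit, its order, and the choice among multiple intersections are assumptions, since the disclosure leaves the fitting and intersection methods open.

```python
import numpy as np

def intra_threshold_from_series(variances, psnr_o, psnr_l, degree=2):
    """Fit a least-squares curve to each series (stage 178) and return the
    variance at which the PSNR_O and PSNR_L curves intersect (stages 180-182)."""
    fit_o = np.polyfit(variances, psnr_o, degree)
    fit_l = np.polyfit(variances, psnr_l, degree)
    # Roots of the difference polynomial are the candidate intersections.
    roots = np.roots(np.polysub(fit_o, fit_l))
    in_range = [r.real for r in roots
                if abs(r.imag) < 1e-9
                and min(variances) <= r.real <= max(variances)]
    return min(in_range) if in_range else None
```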
The method 190 operates on a test sequence of frames. The test sequence of frames can contain video data similar to that expected to be encoded. For example, in a screencasting encoding application, the test sequence of frames could be an exemplary screencasting video data stream. In another example, if an encoder could be used both for screencasting and for encoding of moving pictures (i.e. video clips and/or movies), the test sequence of frames can include both a screencasting video data stream and a moving picture video data stream.
Once variables are initialized, the method 190 next checks to see if any frames are left to process in the test sequence of frames (194). If there is at least one frame left, the next frame for processing is selected (196). The variance of the selected frame is calculated using a method such as method 120 that includes determining an inter frame variance (198). The selected frame is encoded and then decoded using its original resolution (200). The encoding is performed to create an encoded frame that is within a target bitrate.
A peak signal-to-noise ratio (PSNR) is calculated using the frame and the decoded frame (202). As discussed previously, a PSNR value is a measure of quality comparing the original frame and a lossy-encoded reconstructed (decoded) frame. In this case, the PSNR measures the quality of the resulting decoded frame after being compressed to the target bitrate using techniques other than the changing of pixel resolution (i.e. quantization). The PSNR can be calculated using an MSE, such as described previously.
Once the inter variance and PSNR have been calculated for the selected frame, the method 190 returns to stage 194 to determine if any additional frames are available in the test sequence of frames. Once there are no frames left, a candidate variance set is selected that contains a series of inter variances and PSNR values for frames where the PSNR value exceeds a PSNR threshold (204). The candidate variance set can alternatively include only the selected inter variances. The largest variance in the candidate variance set is then identified (206). This identified maximum (largest) variance value is the inter threshold (208). Alternatively, the identified maximum variance value may be multiplied by a constant or processed by a standard function to normalize it for use in the encoder as the inter threshold.
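Stages 204 through 208 reduce to a short selection; the sketch below assumes parallel lists of per-frame inter variances and PSNR values.

```python
def inter_threshold_from_series(variances, psnrs, psnr_threshold):
    """Stages 204-208: keep the variances of frames whose PSNR exceeds the
    PSNR threshold, then take the largest as the inter threshold."""
    candidates = [v for v, p in zip(variances, psnrs) if p > psnr_threshold]
    return max(candidates) if candidates else None
```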
The above-described embodiments of encoding or decoding may illustrate some exemplary encoding techniques. However, in general, encoding and decoding as those terms are used in the claims are understood to mean compression, decompression, transformation or any other change to data whatsoever.
The embodiments of transmitting station 12 and/or receiving station 30 (and the algorithms, methods, instructions, etc. stored thereon and/or executed thereby) can be implemented in hardware, software, or any combination thereof, including, for example, IP cores, ASICs, programmable logic arrays, quantum or molecular processors, optical processors, programmable logic controllers, microcode, firmware, microcontrollers, servers, microprocessors, digital signal processors, or any other suitable circuit. In the claims, the term “processor” should be understood as encompassing any of the foregoing devices, either singly or in combination. The terms “signal” and “data” are used interchangeably. Further, portions of transmitting station 12 and receiving station 30 do not necessarily have to be implemented in the same manner.
Further, in one embodiment, for example, transmitting station 12 or receiving station 30 can be implemented using a general purpose computer/processor with a computer program that, when executed, carries out any of the respective methods, algorithms and/or instructions described herein. In addition or alternatively, for example, a special purpose computer/processor can be utilized which can contain specialized hardware for carrying out any of the methods, algorithms, or instructions described herein.
Transmitting station 12 and receiving station 30 can, for example, be implemented on computers in a screencasting system. Alternatively, transmitting station 12 can be implemented on a server, and receiving station 30 can be implemented on a device separate from the server, such as a hand-held communications device (e.g. a cell phone). In this instance, transmitting station 12 can encode content using an encoder into an encoded video signal and transmit the encoded video signal to the communications device. In turn, the communications device can then decode the encoded video signal using a decoder. Alternatively, the communications device can decode content stored locally on the communications device (i.e. no transmission is necessary). Other suitable transmitting station 12 and receiving station 30 implementation schemes are available. For example, receiving station 30 can be a personal computer rather than a portable communications device.
Further, all or a portion of embodiments of the present invention can take the form of a computer program product accessible from, for example, a computer-usable or computer-readable medium. A computer-usable or computer-readable medium can be any device that can, for example, tangibly contain, store, communicate, or transport the program for use by or in connection with any processor. The medium can be, for example, an electronic, magnetic, optical, electromagnetic, or a semiconductor device. Other suitable mediums are also available.
The above-described embodiments have been described in order to allow easy understanding of the present invention and do not limit the present invention. On the contrary, the invention is intended to cover various modifications and equivalent arrangements included within the scope of the appended claims, which scope is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structure as is permitted under the law.
This application is a continuation of U.S. patent application Ser. No. 13/095,968, filed Apr. 28, 2011, which is incorporated herein in its entirety by reference. This application is related to U.S. patent application Ser. No. 13/095,967, filed Apr. 28, 2011, and U.S. patent application Ser. No. 13/096,285, filed Apr. 28, 2011, each of which is incorporated herein in its entirety by reference.
Number | Name | Date | Kind |
---|---|---|---|
5262855 | Alattar et al. | Nov 1993 | A |
5452435 | Malouf et al. | Sep 1995 | A |
5638114 | Hatanaka et al. | Jun 1997 | A |
5731840 | Kikuchi et al. | Mar 1998 | A |
5801756 | Iizawa | Sep 1998 | A |
5805222 | Nakagawa et al. | Sep 1998 | A |
6021213 | Helterbrand et al. | Feb 2000 | A |
6025870 | Hardy | Feb 2000 | A |
6091777 | Guetz et al. | Jul 2000 | A |
6195391 | Hancock et al. | Feb 2001 | B1 |
6204847 | Wright | Mar 2001 | B1 |
6243683 | Peters | Jun 2001 | B1 |
6266337 | Marco | Jul 2001 | B1 |
6346963 | Katsumi | Feb 2002 | B1 |
6363067 | Chung | Mar 2002 | B1 |
6421387 | Rhee | Jul 2002 | B1 |
6462791 | Zhu | Oct 2002 | B1 |
6483454 | Torre et al. | Nov 2002 | B1 |
6556588 | Wan et al. | Apr 2003 | B2 |
6577333 | Tai et al. | Jun 2003 | B2 |
6587985 | Fukushima et al. | Jul 2003 | B1 |
6681362 | Abbott et al. | Jan 2004 | B1 |
6684354 | Fukushima et al. | Jan 2004 | B2 |
6707852 | Wang | Mar 2004 | B1 |
6711209 | Lainema et al. | Mar 2004 | B1 |
6728317 | Demos | Apr 2004 | B1 |
6732313 | Fukushima et al. | May 2004 | B2 |
6741569 | Clark | May 2004 | B1 |
6812956 | Ferren et al. | Nov 2004 | B2 |
6816836 | Basu et al. | Nov 2004 | B2 |
6918077 | Fukushima et al. | Jul 2005 | B2 |
6952450 | Cohen | Oct 2005 | B2 |
7007098 | Smyth et al. | Feb 2006 | B1 |
7007235 | Hussein et al. | Feb 2006 | B1 |
7015954 | Foote et al. | Mar 2006 | B1 |
7114129 | Awada et al. | Sep 2006 | B2 |
7124333 | Fukushima et al. | Oct 2006 | B2 |
7143091 | Charnock et al. | Nov 2006 | B2 |
7178106 | Lamkin et al. | Feb 2007 | B2 |
7180896 | Okumura | Feb 2007 | B1 |
7197070 | Zhang et al. | Mar 2007 | B1 |
7219062 | Colmenarez et al. | May 2007 | B2 |
7263644 | Park et al. | Aug 2007 | B2 |
7313283 | Kondo et al. | Dec 2007 | B2 |
7356750 | Fukushima et al. | Apr 2008 | B2 |
7372834 | Kim et al. | May 2008 | B2 |
7376880 | Ichiki et al. | May 2008 | B2 |
7379653 | Yap et al. | May 2008 | B2 |
7424056 | Lin et al. | Sep 2008 | B2 |
7447235 | Luby et al. | Nov 2008 | B2 |
7447969 | Park et al. | Nov 2008 | B2 |
7484157 | Park et al. | Jan 2009 | B2 |
7532764 | Lee et al. | May 2009 | B2 |
7577898 | Costa et al. | Aug 2009 | B2 |
7636298 | Miura et al. | Dec 2009 | B2 |
7664185 | Zhang et al. | Feb 2010 | B2 |
7664246 | Krantz et al. | Feb 2010 | B2 |
7680076 | Michel et al. | Mar 2010 | B2 |
7684982 | Taneda | Mar 2010 | B2 |
7710973 | Rumbaugh et al. | May 2010 | B2 |
7735111 | Michener et al. | Jun 2010 | B2 |
7739714 | Guedalia | Jun 2010 | B2 |
7756127 | Nagai et al. | Jul 2010 | B2 |
7797274 | Strathearn et al. | Sep 2010 | B2 |
7822607 | Aoki et al. | Oct 2010 | B2 |
7823039 | Park et al. | Oct 2010 | B2 |
7860718 | Lee et al. | Dec 2010 | B2 |
7864210 | Kennedy | Jan 2011 | B2 |
7876820 | Auwera et al. | Jan 2011 | B2 |
7974243 | Nagata et al. | Jul 2011 | B2 |
8010185 | Ueda | Aug 2011 | B2 |
8019175 | Lee et al. | Sep 2011 | B2 |
8060651 | Deshpande et al. | Nov 2011 | B2 |
8085767 | Lussier et al. | Dec 2011 | B2 |
8087056 | Ryu | Dec 2011 | B2 |
8130823 | Gordon et al. | Mar 2012 | B2 |
8160130 | Ratakonda et al. | Apr 2012 | B2 |
8161159 | Shetty et al. | Apr 2012 | B1 |
8175041 | Shao et al. | May 2012 | B2 |
8176524 | Singh et al. | May 2012 | B2 |
8179983 | Gordon et al. | May 2012 | B2 |
8233539 | Kwon | Jul 2012 | B2 |
8265450 | Black et al. | Sep 2012 | B2 |
8307403 | Bradstreet et al. | Nov 2012 | B2 |
8385422 | Sato | Feb 2013 | B2 |
8443398 | Swenson et al. | May 2013 | B2 |
8448259 | Haga et al. | May 2013 | B2 |
8494053 | He et al. | Jul 2013 | B2 |
8553776 | Shi et al. | Oct 2013 | B2 |
8566886 | Scholl | Oct 2013 | B2 |
20020003573 | Yamaguchi et al. | Jan 2002 | A1 |
20020085637 | Henning | Jul 2002 | A1 |
20020140851 | Laksono | Oct 2002 | A1 |
20020152318 | Menon et al. | Oct 2002 | A1 |
20020157058 | Ariel et al. | Oct 2002 | A1 |
20020176604 | Shekhar et al. | Nov 2002 | A1 |
20020191072 | Henrikson | Dec 2002 | A1 |
20030012287 | Katsavounidis et al. | Jan 2003 | A1 |
20030016630 | Vega-Garcia et al. | Jan 2003 | A1 |
20030061368 | Chaddha | Mar 2003 | A1 |
20030098992 | Park et al. | May 2003 | A1 |
20030226094 | Fukushima et al. | Dec 2003 | A1 |
20030229822 | Kim et al. | Dec 2003 | A1 |
20030229900 | Reisman | Dec 2003 | A1 |
20040071170 | Fukuda | Apr 2004 | A1 |
20040105004 | Rui et al. | Jun 2004 | A1 |
20040165585 | Imura et al. | Aug 2004 | A1 |
20040172252 | Aoki et al. | Sep 2004 | A1 |
20040172255 | Aoki et al. | Sep 2004 | A1 |
20040184444 | Aimoto et al. | Sep 2004 | A1 |
20040196902 | Faroudja | Oct 2004 | A1 |
20040233938 | Yamauchi | Nov 2004 | A1 |
20050041150 | Gewickey et al. | Feb 2005 | A1 |
20050076272 | Delmas et al. | Apr 2005 | A1 |
20050117653 | Sankaran | Jun 2005 | A1 |
20050125734 | Mohammed et al. | Jun 2005 | A1 |
20050154965 | Ichiki et al. | Jul 2005 | A1 |
20050157793 | Ha et al. | Jul 2005 | A1 |
20050180415 | Cheung et al. | Aug 2005 | A1 |
20050185715 | Karczewicz et al. | Aug 2005 | A1 |
20050220188 | Wang | Oct 2005 | A1 |
20050238243 | Kondo et al. | Oct 2005 | A1 |
20050251856 | Araujo et al. | Nov 2005 | A1 |
20050259729 | Sun | Nov 2005 | A1 |
20050276327 | Lee et al. | Dec 2005 | A1 |
20060013310 | Lee et al. | Jan 2006 | A1 |
20060039470 | Kim et al. | Feb 2006 | A1 |
20060066717 | Miceli | Mar 2006 | A1 |
20060146940 | Gomila et al. | Jul 2006 | A1 |
20060150055 | Quinard et al. | Jul 2006 | A1 |
20060153217 | Chu et al. | Jul 2006 | A1 |
20060188014 | Civanlar et al. | Aug 2006 | A1 |
20060215014 | Cohen et al. | Sep 2006 | A1 |
20060215752 | Lee et al. | Sep 2006 | A1 |
20060247927 | Robbins et al. | Nov 2006 | A1 |
20060248563 | Lee et al. | Nov 2006 | A1 |
20060282774 | Covell et al. | Dec 2006 | A1 |
20060291475 | Cohen | Dec 2006 | A1 |
20070036354 | Wee et al. | Feb 2007 | A1 |
20070064094 | Potekhin et al. | Mar 2007 | A1 |
20070080971 | Sung | Apr 2007 | A1 |
20070081522 | Apelbaum | Apr 2007 | A1 |
20070081587 | Raveendran et al. | Apr 2007 | A1 |
20070097257 | El-Maleh et al. | May 2007 | A1 |
20070121100 | Divo | May 2007 | A1 |
20070168824 | Fukushima et al. | Jul 2007 | A1 |
20070195893 | Kim et al. | Aug 2007 | A1 |
20070223529 | Lee et al. | Sep 2007 | A1 |
20070237226 | Regunathan et al. | Oct 2007 | A1 |
20070237232 | Chang et al. | Oct 2007 | A1 |
20070250754 | Costa et al. | Oct 2007 | A1 |
20070268964 | Zhao | Nov 2007 | A1 |
20070285505 | Korneliussen | Dec 2007 | A1 |
20080037624 | Walker et al. | Feb 2008 | A1 |
20080043832 | Barkley et al. | Feb 2008 | A1 |
20080063054 | Ratakonda et al. | Mar 2008 | A1 |
20080072267 | Monta et al. | Mar 2008 | A1 |
20080089414 | Wang et al. | Apr 2008 | A1 |
20080101403 | Michel et al. | May 2008 | A1 |
20080109707 | Dell et al. | May 2008 | A1 |
20080126278 | Bronstein et al. | May 2008 | A1 |
20080134005 | Izzat et al. | Jun 2008 | A1 |
20080144553 | Shao et al. | Jun 2008 | A1 |
20080209300 | Fukushima et al. | Aug 2008 | A1 |
20080250294 | Ngo et al. | Oct 2008 | A1 |
20080260042 | Shah et al. | Oct 2008 | A1 |
20080270528 | Girardeau et al. | Oct 2008 | A1 |
20080273591 | Brooks et al. | Nov 2008 | A1 |
20090006927 | Sayadi et al. | Jan 2009 | A1 |
20090007159 | Rangarajan et al. | Jan 2009 | A1 |
20090010325 | Nie et al. | Jan 2009 | A1 |
20090022157 | Rumbaugh et al. | Jan 2009 | A1 |
20090031390 | Rajakarunanayake et al. | Jan 2009 | A1 |
20090059067 | Takanohashi et al. | Mar 2009 | A1 |
20090059917 | Lussier et al. | Mar 2009 | A1 |
20090080510 | Wiegand et al. | Mar 2009 | A1 |
20090103635 | Pahalawatta | Apr 2009 | A1 |
20090122867 | Mauchly et al. | May 2009 | A1 |
20090138784 | Tamura et al. | May 2009 | A1 |
20090144417 | Kisel et al. | Jun 2009 | A1 |
20090161763 | Rossignol et al. | Jun 2009 | A1 |
20090180537 | Park et al. | Jul 2009 | A1 |
20090219993 | Bronstein et al. | Sep 2009 | A1 |
20090237728 | Yamamoto | Sep 2009 | A1 |
20090238277 | Meehan | Sep 2009 | A1 |
20090241147 | Kim et al. | Sep 2009 | A1 |
20090245351 | Watanabe | Oct 2009 | A1 |
20090249158 | Noh et al. | Oct 2009 | A1 |
20090254657 | Melnyk et al. | Oct 2009 | A1 |
20090268819 | Nishida | Oct 2009 | A1 |
20090276686 | Liu et al. | Nov 2009 | A1 |
20090276817 | Colter et al. | Nov 2009 | A1 |
20090307428 | Schmieder et al. | Dec 2009 | A1 |
20090322854 | Ellner | Dec 2009 | A1 |
20100026608 | Adams et al. | Feb 2010 | A1 |
20100040349 | Landy | Feb 2010 | A1 |
20100054333 | Bing et al. | Mar 2010 | A1 |
20100077058 | Messer | Mar 2010 | A1 |
20100122127 | Oliva et al. | May 2010 | A1 |
20100149301 | Lee et al. | Jun 2010 | A1 |
20100153828 | De Lind Van Wijngaarden et al. | Jun 2010 | A1 |
20100171882 | Cho et al. | Jul 2010 | A1 |
20100192078 | Hwang et al. | Jul 2010 | A1 |
20100202414 | Malladi et al. | Aug 2010 | A1 |
20100220172 | Michaelis | Sep 2010 | A1 |
20100235583 | Gokaraju et al. | Sep 2010 | A1 |
20100235820 | Khouzam et al. | Sep 2010 | A1 |
20100306618 | Kim et al. | Dec 2010 | A1 |
20100309372 | Zhong | Dec 2010 | A1 |
20100309982 | Le Floch et al. | Dec 2010 | A1 |
20110010629 | Castro et al. | Jan 2011 | A1 |
20110026582 | Bauza et al. | Feb 2011 | A1 |
20110026593 | New et al. | Feb 2011 | A1 |
20110033125 | Shiraishi | Feb 2011 | A1 |
20110051955 | Cui et al. | Mar 2011 | A1 |
20110051995 | Guo et al. | Mar 2011 | A1 |
20110069890 | Besley | Mar 2011 | A1 |
20110093273 | Lee et al. | Apr 2011 | A1 |
20110103480 | Dane | May 2011 | A1 |
20110131144 | Ashour et al. | Jun 2011 | A1 |
20110158529 | Malik | Jun 2011 | A1 |
20110194605 | Amon et al. | Aug 2011 | A1 |
20110218439 | Masui et al. | Sep 2011 | A1 |
20110219331 | DeLuca et al. | Sep 2011 | A1 |
20120013705 | Taylor et al. | Jan 2012 | A1 |
20120044383 | Lee | Feb 2012 | A1 |
20120084821 | Rogers | Apr 2012 | A1 |
20120110443 | Lemonik et al. | May 2012 | A1 |
20120206562 | Yang et al. | Aug 2012 | A1 |
20120275502 | Hsieh et al. | Nov 2012 | A1 |
20120294355 | Holcomb et al. | Nov 2012 | A1 |
20120294369 | Bhagavathy et al. | Nov 2012 | A1 |
20130031441 | Ngo et al. | Jan 2013 | A1 |
20130039412 | Narroschke et al. | Feb 2013 | A1 |
Number | Date | Country |
---|---|---|
1777969 | Apr 2007 | EP |
0715711 | Jan 1995 | JP |
WO0249356 | Jun 2002 | WO |
WO2008006062 | Jan 2008 | WO |
Entry |
---|
Chae-Eun Rhee et al., "A Real-Time H.264/AVC Encoder with Complexity-Aware Time Allocation," IEEE Transactions on Circuits and Systems for Video Technology, vol. 20, no. 12, pp. 1848-1862, Dec. 2010. |
Giachetti, "Matching techniques to compute image motion," Image and Vision Computing, vol. 18, no. 3, Feb. 2000, pp. 247-260. |
Ahn et al., Flat-region Detection and False Contour Removal in the Digital TV Display, http://www.cecs.uci.edu/˜papers/icme05/defevent/papers/cr1737.pdf. |
Daly et al., Decontouring: Prevention and Removal of False Contour Artifacts, from Conference vol. 5292, Human Vision and Electronic Imaging IX, Jun. 7, 2004. |
“Series H: Audiovisual and Multimedia Systems; Infrastructure of audiovisual services—Coding of moving video; Advanced video coding for generic audiovisual services”. H.264. Version 1. International Telecommunication Union. Dated May 2003. |
“Series H: Audiovisual and Multimedia Systems; Infrastructure of audiovisual services—Coding of moving video; Advanced video coding for generic audiovisual services”. H.264. Version 3. International Telecommunication Union. Dated Mar. 2005. |
“Overview; VP7 Data Format and Decoder”. Version 1.5. On2 Technologies, Inc. Dated Mar. 28, 2005. |
“Series H: Audiovisual and Multimedia Systems; Infrastructure of audiovisual services—Coding of moving video; Advanced video coding for generic audiovisual services”. H.264. Amendment 1: Support of additional colour spaces and removal of the High 4:4:4 Profile. International Telecommunication Union. Dated Jun. 2006. |
“VP6 Bitstream & Decoder Specification”. Version 1.02. On2 Technologies, Inc. Dated Aug. 17, 2006. |
“Series H: Audiovisual and Multimedia Systems; Infrastructure of audiovisual services—Coding of moving video”. H.264. Amendment 2: New profiles for professional applications. International Telecommunication Union. Dated Apr. 2007. |
“VP6 Bitstream & Decoder Specification”. Version 1.03. On2 Technologies, Inc. Dated Oct. 29, 2007. |
“Series H: Audiovisual and Multimedia Systems; Infrastructure of audiovisual services—Coding of moving video”. H.264. Advanced video coding for generic audiovisual services. Version 8. International Telecommunication Union. Dated Nov. 1, 2007. |
“Series H: Audiovisual and Multimedia Systems; Infrastructure of audiovisual services—Coding of moving video”. H.264. Advanced video coding for generic audiovisual services. International Telecommunication Union. Version 11. Dated Mar. 2009. |
“Series H: Audiovisual and Multimedia Systems; Infrastructure of audiovisual services—Coding of moving video”. H.264. Advanced video coding for generic audiovisual services. International Telecommunication Union. Version 12. Dated Mar. 2010. |
“Implementors' Guide; Series H: Audiovisual and Multimedia Systems; Coding of moving video: Implementors Guide for H.264: Advanced video coding for generic audiovisual services”. H.264. International Telecommunication Union. Version 12. Dated Jul. 30, 2010. |
“VP8 Data Format and Decoding Guide”. WebM Project. Google On2. Dated: Dec. 1, 2010. |
Bankoski et al. “VP8 Data Format and Decoding Guide; draft-bankoski-vp8-bitstream-02” Network Working Group. Dated May 18, 2011. |
Bankoski et al. “Technical Overview of VP8, an Open Source Video Codec for the Web”. Dated Jul. 11, 2011. |
Bankoski, J., Koleszar, J., Quillio, L., Salonen, J., Wilkins, P., and Y. Xu, “VP8 Data Format and Decoding Guide”, RFC 6386, Nov. 2011. |
Mozilla, “Introduction to Video Coding Part 1: Transform Coding”, Video Compression Overview, Mar. 2012, 171 pp. |
U.S. Appl. No. 13/095,967, filed Apr. 28, 2011. |
U.S. Appl. No. 13/096,285, filed Apr. 28, 2011. |
Series H: Audiovisual and Multimedia Systems, Infrastructure of audiovisual services—Coding of moving video, Video coding for low bit rate communication, International Telecommunication Union, ITU-T Recommendation H.263, Feb. 1998, 167 pp. |
Chen, Yu, et al., “An Error Concealment Algorithm for Entire Frame Loss in Video Transmission,” Picture Coding Symposium, 2004. |
European Search Report for European Patent Application No. 08146463.1 dated Jun. 23, 2009. |
Feng, Wu-chi; Rexford, Jennifer; “A Comparison of Bandwidth Smoothing Techniques for the Transmission of Prerecorded Compressed Video”, Paper, 1992, 22 pages. |
Friedman, et al., "RTP Control Protocol Extended Reports (RTCP XR)," Network Working Group RFC 3611 (The Internet Society 2003) (52 pp). |
Frossard, Pascal; “Joint Source/FEC Rate Selection for Quality-Optimal MPEG-2 Video Delivery”, IEEE Transactions on Image Processing, vol. 10, No. 12, (Dec. 2001) pp. 1815-1825. |
Hartikainen, E. and Ekelin, S. Tuning the Temporal Characteristics of a Kalman-Filter Method for End-to-End Bandwidth Estimation. IEEE E2EMON. Apr. 3, 2006. |
International Search Report for International Application No. PCT/EP2009/057252 mailed on Aug. 27, 2009. |
JongWon Kim, Young-Gook Kim, HwangJun Song, Tien-Ying Kuo, Yon Jun Chung, and C.-C. Jay Kuo; "TCP-friendly Internet Video Streaming employing Variable Frame-rate Encoding and Interpolation"; IEEE Trans. Circuits Syst. Video Technology, Jan. 2000; vol. 10, pp. 1164-1177. |
Khronos Group Inc. OpenMAX Integration Layer Application Programming Interface Specification. Dec. 16, 2005, 326 pages, Version 1.0. |
Korhonen, Jari; Frossard, Pascal; “Flexible forward error correction codes with application to partial media data recovery”, Signal Processing: Image Communication vol. 24, No. 3 (Mar. 2009) pp. 229-242. |
Li, A., “RTP Payload Format for Generic Forward Error Correction”, Network Working Group, Standards Track, Dec. 2007, (45 pp). |
Liang, Y.J.; Apostolopoulos, J.G.; Girod, B., "Analysis of packet loss for compressed video: does burst-length matter?," Acoustics, Speech and Signal Processing, 2003. Proceedings (ICASSP '03), 2003 IEEE International Conference on, vol. 5, pp. V-684-V-687, Apr. 6-10, 2003. |
Neogi, A., et al., Compression Techniques for Active Video Content; State University of New York at Stony Brook; Computer Science Department; pp. 1-11. |
Peng, Qiang, et al., “Block-Based Temporal Error Concealment for Video Packet Using Motion Vector Extrapolation,” IEEE 2003 Conference of Communications, Circuits and Systems and West Sino Expositions, vol. 1, No. 29, pp. 10-14 (IEEE 2002). |
Roca, Vincent, et al., Design and Evaluation of a Low Density Generator Matrix (LDGM) Large Block FEC Codec, INRIA Rhone-Alpes, Planete project, France, Date Unknown, (12 pp). |
Scalable Video Coding (SVC), Annex G extension of H.264. |
Yan, Bo and Gharavi, Hamid, “A Hybrid Frame Concealment Algorithm for H.264/AVC,” IEEE Transactions on Image Processing, vol. 19, No. 1, pp. 98-107 (IEEE, Jan. 2010). |
Yoo, S.J.B., "Optical Packet and Burst Switching Technologies for the Future Photonic Internet," Journal of Lightwave Technology, vol. 24, no. 12, pp. 4468-4492, Dec. 2006. |
Yu, Xunqi, et al.; "The Accuracy of Markov Chain Models in Predicting Packet-Loss Statistics for a Single Multiplexer," IEEE Transactions on Information Theory, vol. 54, no. 1 (Jan. 2008), pp. 489-501. |
Number | Date | Country | |
---|---|---|---|
Parent | 13095968 | Apr 2011 | US |
Child | 14175127 | US |