Motion estimation aided noise reduction

Information

  • Patent Grant
  • Patent Number
    9,131,073
  • Date Filed
    Friday, March 2, 2012
  • Date Issued
    Tuesday, September 8, 2015
Abstract
A method and apparatus for performing motion estimation aided noise reduction encoding and decoding are provided. Motion estimation aided noise reduction encoding can include identifying a motion vector for encoding a current block in a video frame based on a reference block in a reference frame, identifying a noise reduction block for denoising the current block, aligning the noise reduction block with the current block, denoising the current block, identifying a motion vector for encoding the denoised block, generating a residual block for encoding the denoised block, and encoding the denoised block. Denoising the current block can include identifying a filter coefficient for a pixel in the current block based on a corresponding pixel in the noise reduction block, producing a denoised pixel based on the coefficient and the corresponding pixel, and determining whether to use the denoised pixel or the current pixel for encoding the block.
Description
TECHNICAL FIELD

This application relates to video encoding and decoding.


BACKGROUND

Digital video can be used, for example, for remote business meetings via video conferencing, high definition video entertainment, video advertisements, or sharing of user-generated videos. Accordingly, it would be advantageous to provide high resolution video transmitted over communications channels having limited bandwidth.


SUMMARY

A method and apparatus for performing motion estimation aided noise reduction encoding and decoding are provided. Motion estimation aided noise reduction encoding can include identifying a motion vector for encoding a current block in a video frame based on a reference block in a reference frame, identifying a noise reduction block for denoising the current block, aligning the noise reduction block with the current block, denoising the current block, identifying a motion vector for encoding the denoised block, generating a residual block for encoding the denoised block, and encoding the denoised block. Denoising the current block can include identifying a filter coefficient for a pixel in the current block based on a corresponding pixel in the noise reduction block, producing a denoised pixel based on the coefficient and the corresponding pixel, and determining whether to use the denoised pixel or the current pixel for encoding the block.





BRIEF DESCRIPTION OF THE DRAWINGS

The description herein makes reference to the accompanying drawings wherein like reference numerals refer to like parts throughout the several views, and wherein:



FIG. 1 is a schematic of a video encoding and decoding system;



FIG. 2 is a diagram of a typical video stream to be encoded and decoded;



FIG. 3 is a block diagram of a video compression system in accordance with one embodiment;



FIG. 4 is a block diagram of a video decompression system in accordance with another embodiment;



FIG. 5 is a block diagram of motion estimation aided noise reduction encoding for a series of frames in accordance with implementations of this disclosure;



FIG. 6 is a diagram of motion estimation aided noise reduction encoding in accordance with implementations of this disclosure;



FIG. 7 is a diagram of generating a denoised block in accordance with implementations of this disclosure; and



FIG. 8 is a diagram of generating a denoised block using multiple noise reduction blocks in accordance with implementations of this disclosure.





DETAILED DESCRIPTION

Digital video is used for various purposes including, for example, remote business meetings via video conferencing, high definition video entertainment, video advertisements, and sharing of user-generated videos. Digital video streams can include formats such as VP8, promulgated by Google, Inc. of Mountain View, Calif., and H.264, a standard promulgated by ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG), including present and future versions thereof.


Video encoding and decoding can include performing motion estimation to generate a motion vector for encoding a block in a current frame based on a corresponding block in a reference frame of a video signal. The video signal can include noise, such as random variations of pixel values. For example, a pixel, such as a background pixel, may include noise, such as a random change in color or brightness when compared to a corresponding pixel in a previous frame. Noise reduction can be performed at the encoder to reduce the noise in the video signal. Motion estimation aided noise reduction can include using the motion vector identified during motion estimation in conjunction with a noise reduction block to efficiently reduce noise in the signal while minimizing blurring and/or artifacting, for example, by replacing a pixel value with an average pixel value. For example, the average pixel value can be generated using a motion compensated noise reduction pixel from the noise reduction block and the pixel value.


A noise reduction block can include pixels used for noise reduction of a block in another frame, such as a previous frame in the video sequence. For example, a noise reduction block can be used for denoising a first block in a first frame in the video signal. The noise reduction frame can include information, such as denoised pixel values, from denoising the first block, and can be used for denoising a second block in a second frame in the video signal.



FIG. 1 is a schematic of a video encoding and decoding system. An exemplary transmitting station 12 can be, for example, a computing device having an internal configuration of hardware including a processor such as a central processing unit (CPU) 14 and a memory 16. The CPU 14 can be a controller for controlling the operations of the transmitting station 12. The CPU 14 is connected to the memory 16 by, for example, a memory bus. The memory 16 can be random access memory (RAM) or any other suitable memory device. The memory 16 can store data and program instructions which are used by the CPU 14. Other suitable implementations of the transmitting station 12 are possible. As used herein, the term “computing device” includes a server, a hand-held device, a laptop computer, a desktop computer, a special purpose computer, a general purpose computer, or any device, or combination of devices, capable of processing information, programmed to perform the methods, or any portion thereof, disclosed herein.


A network 28 can connect the transmitting station 12 and a receiving station 30 for encoding and decoding of the video stream. Specifically, the video stream can be encoded in the transmitting station 12 and the encoded video stream can be decoded in the receiving station 30. The network 28 can, for example, be the Internet. The network 28 can also be a local area network (LAN), wide area network (WAN), virtual private network (VPN), a mobile phone network, or any other means of transferring the video stream from the transmitting station 12.


The receiving station 30, in one example, can be a computing device having an internal configuration of hardware including a processor such as a central processing unit (CPU) 32 and a memory 34. The CPU 32 is a controller for controlling the operations of the receiving station 30. The CPU 32 can be connected to the memory 34 by, for example, a memory bus. The memory 34 can be RAM or any other suitable memory device. The memory 34 stores data and program instructions which are used by the CPU 32. Other suitable implementations of the receiving station 30 are possible.


A display 36 configured to display a video stream can be connected to the receiving station 30. The display 36 can be implemented in various ways, including by a liquid crystal display (LCD) or a cathode-ray tube (CRT). The display 36 can be configured to display a video stream decoded at the receiving station 30.


Other implementations of the encoder and decoder system 10 are possible. For example, one implementation can omit the network 28 and/or the display 36. In another implementation, a video stream can be encoded and then stored for transmission at a later time to the receiving station 30 or any other device having memory. In an implementation, the receiving station 30 receives (e.g., via network 28, a computer bus, and/or some communication pathway) the encoded video stream and stores the video stream for later decoding. In another implementation, additional components can be added to the encoder and decoder system 10. For example, a display or a video camera can be attached to the transmitting station 12 to capture the video stream to be encoded.



FIG. 2 is a diagram of a typical video stream 50 to be encoded and decoded. The video stream 50 includes a video sequence 52. At the next level, the video sequence 52 includes a number of adjacent frames 54. While three frames are depicted in adjacent frames 54, the video sequence 52 can include any number of adjacent frames. The adjacent frames 54 can then be further subdivided into a single frame 56. At the next level, the single frame 56 can be divided into a series of blocks 58. Although not shown in FIG. 2, a block 58 can include pixels. For example, a block can include a 16×16 group of pixels, an 8×8 group of pixels, an 8×16 group of pixels, or any other group of pixels. Unless otherwise indicated herein, the term ‘block’ can include a macroblock, a segment, a slice, or any other portion of a frame. A frame, a block, a pixel, or a combination thereof can include display information, such as luminance information, chrominance information, or any other information that can be used to store, modify, communicate, or display the video stream or a portion thereof.
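
As an informal illustration only (not part of the patent text), the subdivision of a frame into fixed-size pixel blocks can be sketched in Python as follows; the 16×16 block size, the single luminance plane, and the function name are assumptions made for the example.

    import numpy as np

    def iter_blocks(frame, block_size=16):
        # Yield (row, col, block) for each block of the frame in raster order.
        # `frame` is assumed to be a 2-D array of luminance values; chrominance
        # planes could be traversed the same way.
        rows, cols = frame.shape
        for r in range(0, rows, block_size):
            for c in range(0, cols, block_size):
                yield r, c, frame[r:r + block_size, c:c + block_size]

    # Example: a 64x64 frame of random 8-bit pixels yields sixteen 16x16 blocks.
    frame = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
    print(len(list(iter_blocks(frame))))  # 16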



FIG. 3 is a block diagram of an encoder 70 in accordance with one embodiment. The encoder 70 can be implemented, as described above, in the transmitting station 12 such as by providing a computer software program stored in memory 16, for example. The computer software program can include machine instructions that, when executed by CPU 14, cause transmitting station 12 to encode video data in the manner described in FIG. 3. Encoder 70 can also be implemented as specialized hardware included, for example, in transmitting station 12. The encoder 70 encodes an input video stream 50. The encoder 70 has the following stages to perform the various functions in a forward path (shown by the solid connection lines) to produce an encoded or a compressed bitstream 88: an intra/inter prediction stage 72, a transform stage 74, a quantization stage 76, and an entropy encoding stage 78. The encoder 70 can also include a reconstruction path (shown by the dotted connection lines) to reconstruct a frame for encoding of further blocks. The encoder 70 has the following stages to perform the various functions in the reconstruction path: a dequantization stage 80, an inverse transform stage 82, a reconstruction stage 84, and a loop filtering stage 86. Other structural variations of the encoder 70 can be used to encode the video stream 50.


When the video stream 50 is presented for encoding, each frame 56 within the video stream 50 is processed in units of blocks. At the intra/inter prediction stage 72, each block can be encoded using either intra-frame prediction, which may be within a single frame, or inter-frame prediction, which may be from frame to frame. In either case, a prediction block can be formed. In the case of intra-prediction, a prediction block can be formed from samples in the current frame that have been previously encoded and reconstructed. In the case of inter-prediction, a prediction block can be formed from samples in one or more previously constructed reference frames.


Next, still referring to FIG. 3, the prediction block can be subtracted from the current block at the intra/inter prediction stage 72 to produce a residual block. The transform stage 74 transforms the residual block into transform coefficients in, for example, the frequency domain. Examples of block-based transforms include the Karhunen-Loève Transform (KLT), the Discrete Cosine Transform (DCT), and the Singular Value Decomposition Transform (SVD). In one example, the DCT transforms the block into the frequency domain. In the case of DCT, the transform coefficient values are based on spatial frequency, with the lowest frequency (i.e. DC) coefficient at the top-left of the matrix and the highest frequency coefficient at the bottom-right of the matrix.
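
As a rough illustration of the transform stage described above (a sketch, not the encoder's actual implementation), the following Python snippet subtracts a prediction block from a current block and applies an orthonormal 2-D DCT, which places the DC coefficient at the top-left of the coefficient matrix; the use of SciPy and the 8×8 block size are assumptions made for the example.

    import numpy as np
    from scipy.fft import dctn

    def transform_residual(current_block, prediction_block):
        # Form the residual and transform it to the frequency domain with a
        # type-II 2-D DCT; coeffs[0, 0] is the DC (lowest-frequency) term.
        residual = current_block.astype(np.float64) - prediction_block.astype(np.float64)
        return dctn(residual, type=2, norm='ortho')

    current = np.random.randint(0, 256, (8, 8))
    prediction = np.random.randint(0, 256, (8, 8))
    coeffs = transform_residual(current, prediction)
    print(coeffs[0, 0])  # DC coefficient at the top-left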


The quantization stage 76 converts the transform coefficients into discrete quantum values, which are referred to as quantized transform coefficients or quantization levels. The quantized transform coefficients are then entropy encoded by the entropy encoding stage 78. Entropy encoding can include using a probability distribution metric. The entropy-encoded coefficients, together with the information used to decode the block, which may include the type of prediction used, motion vectors, and quantizer value, are then output to the compressed bitstream 88. The compressed bitstream 88 can be formatted using various techniques, such as run-length encoding (RLE) and zero-run coding.


The reconstruction path in FIG. 3 (shown by the dotted connection lines) can be used to help ensure that both the encoder 70 and a decoder 100 (described below) use the same reference frames to decode the compressed bitstream 88. The reconstruction path performs functions that are similar to functions that take place during the decoding process that are discussed in more detail below, including dequantizing the quantized transform coefficients at the dequantization stage 80 and inverse transforming the dequantized transform coefficients at the inverse transform stage 82 to produce a derivative residual block. At the reconstruction stage 84, the prediction block that was predicted at the intra/inter prediction stage 72 can be added to the derivative residual block to create a reconstructed block. The loop filtering stage 86 can be applied to the reconstructed block to reduce distortion such as blocking artifacts.


Other variations of the encoder 70 can be used to encode the compressed bitstream 88. For example, a non-transform based encoder 70 can quantize the residual block directly without the transform stage 74. In another embodiment, an encoder 70 can have the quantization stage 76 and the dequantization stage 80 combined into a single stage.



FIG. 4 is a block diagram of a decoder 100 in accordance with another embodiment. The decoder 100 can be implemented in a device, such as the receiving station 30 described above, for example, by providing a computer software program stored in memory 34. The computer software program can include machine instructions that, when executed by CPU 32, cause receiving station 30 to decode video data in the manner described in FIG. 4. Decoder 100 can also be implemented as specialized hardware included, for example, in transmitting station 12 or receiving station 30.


The decoder 100, similar to the reconstruction path of the encoder 70 discussed above, includes in one example the following stages to perform various functions to produce an output video stream 116 from the compressed bitstream 88: an entropy decoding stage 102, a dequantization stage 104, an inverse transform stage 106, an intra/inter prediction stage 108, a reconstruction stage 110, a loop filtering stage 112 and a deblocking filtering stage 114. Other structural variations of the decoder 100 can be used to decode the compressed bitstream 88.


When the compressed bitstream 88 is presented for decoding, the data elements within the compressed bitstream 88 can be decoded by the entropy decoding stage 102 (using, for example, Context Adaptive Binary Arithmetic Decoding) to produce a set of quantized transform coefficients. The dequantization stage 104 dequantizes the quantized transform coefficients, and the inverse transform stage 106 inverse transforms the dequantized transform coefficients to produce a derivative residual block that can be identical to that created by the inverse transform stage 82 in the encoder 70. Using header information decoded from the compressed bitstream 88, the decoder 100 can use the intra/inter prediction stage 108 to create the same prediction block as was created in the encoder 70. At the reconstruction stage 110, the prediction block can be added to the derivative residual block to create a reconstructed block. The loop filtering stage 112 can be applied to the reconstructed block to reduce blocking artifacts. The deblocking filtering stage 114 can be applied to the reconstructed block to reduce blocking distortion, and the result is output as the output video stream 116.
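
The reconstruction steps shared by the decoder and the encoder's reconstruction path might be sketched as follows; this is a simplified illustration, and the single uniform quantizer step is an assumption (a real codec derives step sizes from the quantizer value signaled in the bitstream).

    import numpy as np
    from scipy.fft import idctn

    def reconstruct_block(quantized_coeffs, quantizer_step, prediction_block):
        # Dequantize, inverse transform to get the derivative residual block,
        # add the prediction block, and clip to the valid 8-bit pixel range.
        dequantized = quantized_coeffs * quantizer_step
        derivative_residual = idctn(dequantized, type=2, norm='ortho')
        reconstructed = prediction_block + derivative_residual
        return np.clip(np.rint(reconstructed), 0, 255).astype(np.uint8)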


Other variations of the decoder 100 can be used to decode the compressed bitstream 88. For example, the decoder 100 can produce the output video stream 116 without the deblocking filtering stage 114.



FIG. 5 is a block diagram of motion estimation aided noise reduction encoding for a series of frames in accordance with one embodiment of this disclosure. Implementations of motion estimation aided noise reduction encoding can include using a first noise reduction frame 500, a first unencoded frame 502, or both to generate a first denoised frame 504. In an implementation, generating the first denoised frame 504 can include including a pixel from the first noise reduction frame 500, a pixel from the first unencoded frame 502, a pixel based on a pixel from the first noise reduction frame 500 and a pixel from the first unencoded frame 502, or a combination thereof, in the first denoised frame 504. The first unencoded frame 502 can be a frame from a video sequence, such as the video sequence 52 shown in FIG. 2, for example.


Motion estimation aided noise reduction encoding can include using the first denoised frame 504 to generate a first encoded frame 506. Motion estimation aided noise reduction encoding can include using the first noise reduction frame 500, the first denoised frame 504, or both to generate a second noise reduction frame 510. Generating the second noise reduction frame 510 can include including a pixel from the first noise reduction frame 500, a pixel from the first denoised frame 504, a pixel based on a pixel from the first noise reduction frame 500 and a pixel from the first denoised frame 504, or a combination thereof, in the second noise reduction frame 510.


The second noise reduction frame 510, a second unencoded frame 512, or both can be used to generate a second denoised frame 514. The second unencoded frame 512 can be a second frame in the video sequence. The second denoised frame 514 can be used to generate a second encoded frame 516. The second noise reduction frame 510, the second denoised frame 514, or both can be used to generate a third noise reduction frame 520. The third noise reduction frame 520, a third unencoded frame 522, or both can be used to generate a third denoised frame 524. The third unencoded frame 522 can be a third frame in the video sequence. The third denoised frame 524 can be used to generate a third encoded frame 526. The first encoded frame 506, the second encoded frame 516, the third encoded frame 526, or a combination thereof can be used to generate an encoded video stream, such as the compressed bitstream 88 shown in FIGS. 3 and 5.
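
Purely as an illustration of the frame-level flow in FIG. 5 (not the patented implementation), the chaining of noise reduction frames across a sequence might look like the sketch below; denoise_frame, encode_frame, and update_noise_reduction_frame are hypothetical callables standing in for the per-block operations described later.

    def motion_estimation_aided_noise_reduction(frames, denoise_frame, encode_frame,
                                                update_noise_reduction_frame):
        # Each unencoded frame (e.g. 502, 512, 522) is denoised using the current
        # noise reduction frame (e.g. 500, 510, 520), the denoised frame
        # (e.g. 504, 514, 524) is encoded (e.g. 506, 516, 526), and the noise
        # reduction frame is updated for the next frame in the sequence.
        noise_reduction_frame = None
        encoded_frames = []
        for unencoded_frame in frames:
            denoised_frame = denoise_frame(unencoded_frame, noise_reduction_frame)
            encoded_frames.append(encode_frame(denoised_frame))
            noise_reduction_frame = update_noise_reduction_frame(noise_reduction_frame,
                                                                 denoised_frame)
        return encoded_frames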



FIG. 6 is a diagram of motion estimation aided noise reduction encoding in accordance with one embodiment of this disclosure. Implementations of motion estimation aided noise reduction encoding can include identifying a frame at 600, identifying a current block in the frame at 610, identifying a motion vector for encoding the current block at 620, denoising the block at 630, identifying a motion vector for encoding the denoised block at 640, generating a residual block at 650, generating an encoded block at 660, determining whether to encode another block at 670, determining whether to encode another frame at 680, or a combination thereof. In an implementation, a device, such as the transmitting station 12 shown in FIG. 1, can perform motion estimation aided noise reduction encoding. For example, motion estimation aided noise reduction encoding, or any portion thereof, can be implemented in an encoder, such as the encoder 70 shown in FIG. 3.


As an example, a frame, such as frame 56 shown in FIG. 2, can be identified for encoding at 600. For example, the frame can include an 8×8 matrix of blocks as shown in FIG. 2. Identifying the frame can include identifying a current frame, a reference frame, a noise reduction frame, or a combination thereof.


A block (current block) in the current frame can be identified at 610. For example, the current block can be identified based on Cartesian coordinates. The current block can include pixels. For example, the current block can include a 16×16 matrix of pixels.


A motion vector (MV) for encoding the current block can be identified at 620. For example, the motion vector can be identified based on the current block and a reference block from a reference frame using a method of motion estimation, such as a motion search. Identifying a motion vector can include generating a prediction mode for encoding the current block, generating a sum of squared errors (SSE) for the current block, or both. Identifying the motion vector can include identifying a zero magnitude motion vector (MV0). The zero magnitude motion vector MV0 can indicate a block in the reference frame that is collocated with the current block in the current frame.
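
One possible form of the motion search mentioned above is a brute-force search over a small window that minimizes the sum of squared errors (SSE); the sketch below is an illustration only (the search range, the SSE criterion, and the function names are assumptions, not taken from the patent) and also reports the SSE of the zero magnitude motion vector MV0 for later comparison.

    import numpy as np

    def motion_search(current_block, reference_frame, block_row, block_col, search_range=7):
        # Return the best motion vector (row offset, column offset), its SSE, and
        # the SSE of the zero magnitude motion vector (the collocated block).
        m, n = current_block.shape
        rows, cols = reference_frame.shape
        cur = current_block.astype(np.int64)

        def sse_at(dy, dx):
            r, c = block_row + dy, block_col + dx
            ref = reference_frame[r:r + m, c:c + n].astype(np.int64)
            return int(np.sum((cur - ref) ** 2))

        best_mv, best_sse = (0, 0), sse_at(0, 0)
        sse_zero = best_sse
        for dy in range(-search_range, search_range + 1):
            for dx in range(-search_range, search_range + 1):
                r, c = block_row + dy, block_col + dx
                if 0 <= r and r + m <= rows and 0 <= c and c + n <= cols:
                    sse = sse_at(dy, dx)
                    if sse < best_sse:
                        best_mv, best_sse = (dy, dx), sse
        return best_mv, best_sse, sse_zero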


The current block can be denoised at 630. Denoising the current block can include identifying a noise reduction block at 632 and generating a denoised block at 634. Identifying the noise reduction block at 632 can include using a noise reduction block from a noise reduction frame (e.g., 500), determining whether a difference between the current block and the reference block is noise, aligning the noise reduction block with the current block, or a combination thereof.


Determining whether a difference between the current block and the reference block is noise can include determining whether the magnitude of the motion vector is less than a threshold (T1), which can indicate noise. On a condition that the magnitude of the motion vector is greater than the threshold T1, the noise reduction block can be aligned with the current block. The motion vector can, for example, include Cartesian coordinates, such as an X coordinate, a Y coordinate, or both, and the magnitude of the motion vector can be a function of the coordinates, such as a square root of the sum of the X coordinate squared and the Y coordinate squared.


The motion vector (MV) identified at 620 can be a non-zero motion vector for the current block that is identified based on noise, such as random variations in pixel values in the current frame, the reference frame, or both. For example, the motion search may determine that a location of the reference block in the reference frame that best matches the current block is slightly different than the location of the current block in the current frame based on noise.


Determining whether a difference between the current block and the reference block is noise can include determining whether the difference between a sum of squared errors (SSE) of the motion vector and an SSE of a zero magnitude motion vector is less than a threshold (T2). The SSEs can be determined based on the sum of squared differences between the pixels of the reference block and the current block.


For example, the current block may include an M×N matrix of pixels, such as a 16×16 matrix of pixels, and a pixel in the current block can be expressed as B(M,N). The reference block indicated by the motion vector may include an M×N matrix of pixels, and a pixel in the reference block can be expressed as RB(M,N). The block in the reference frame indicated by the zero magnitude motion vector MV0 may include an M×N matrix of pixels, and a pixel in the block in the reference frame indicated by the zero magnitude motion vector MV0 can be expressed as RB0(M,N). The motion vector can indicate a row offset MVx and a column offset MVy. In an implementation, the SSE associated with the motion vector SSEr may be expressed as:

SSEr = Σx=1..M, y=1..N [B(x,y) − RB(x,y)]².  [Equation 1]


The SSE associated with the zero magnitude motion vector SSE0 may be expressed as:

SSE0 = Σx=1..M, y=1..N [B(x,y) − RB0(x,y)]².  [Equation 2]


On a condition that the magnitude of the motion vector identified at 620 is less than T1 and the difference between SSEr and SSE0 is less than T2, the motion vector may be a small motion vector that indicates noise. On a condition that the magnitude of the motion vector is greater than T1, or the difference between SSEr and SSE0 is greater than T2, the noise reduction block may be aligned with the current block.
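
The noise test described above can be collected into a small helper; this is only a sketch under assumed threshold values, combining the magnitude check against T1 with the SSE comparison against T2 from Equations 1 and 2.

    import math

    def is_small_noise_motion(mv, sse_r, sse_0, t1, t2):
        # mv is (MVx, MVy); treat the block difference as noise when the motion
        # vector is small and SSEr is close to SSE0. Otherwise the caller would
        # align the noise reduction block with the current block using mv.
        magnitude = math.hypot(mv[0], mv[1])  # sqrt(MVx^2 + MVy^2)
        return magnitude < t1 and (sse_r - sse_0) < t2

    # Hypothetical values: a one-pixel motion vector with nearly identical SSEs.
    print(is_small_noise_motion((1, 0), sse_r=1040, sse_0=1100, t1=4.0, t2=500))  # True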


At 634, a denoised block can be generated using the current block and the noise reduction block. For example, a denoised block can be generated as shown in FIG. 7. As shown in FIG. 6, a motion vector for encoding the denoised block can be identified at 640. In an implementation, identifying the motion vector for encoding the denoised block can include using the motion vector identified for encoding the current block at 620 as the motion vector for encoding the denoised block.


In an implementation, identifying the motion vector for encoding the denoised block can include determining whether to use a zero magnitude motion vector to a reference frame for encoding the denoised block. For example, the motion vector identified for encoding the current block may indicate an intra prediction mode and a zero magnitude motion vector to a reference frame can be used for encoding the denoised block. In an implementation, a zero magnitude motion vector to the reference frame may produce a smaller residual than the motion vector identified at 620, and the zero magnitude motion vector can be used for encoding the denoised block. Alternatively, or in addition, identifying the motion vector for encoding the denoised block at 640 can include motion estimation, which can include generating a motion vector for encoding the denoised block based on the denoised block and the reference frame.


A residual block can be generated based on the denoised block at 650, and an encoded block can be generated using the residual block at 660. Whether to encode another block can be determined at 670. For example, unencoded blocks in the current frame can be encoded. Whether to encode another frame can be determined at 680. For example, the noise reduction block can be used to encode another frame, such as a future frame, in the video sequence.


Although not shown in FIG. 6, an encoded video stream, such as the compressed bit stream 88 shown in FIG. 3, can be generated using the encoded block(s) generated at 660. The encoded video stream can be transmitted, stored, further processed, or a combination thereof. For example, the encoded video stream can be stored in a memory, such as the memory 16 and/or 34 shown in FIG. 1. The encoded video stream can also be transmitted to a decoder, such as the decoder 100 shown in FIG. 4.


Other implementations of the diagram of motion estimation aided noise reduction encoding as shown in FIG. 6 are available. In implementations, additional elements of motion estimation aided noise reduction encoding can be added, certain elements can be combined, and/or certain elements can be removed. For example, in an implementation, motion estimation aided noise reduction encoding can include an additional element involving determining whether to denoise a block, and if the determination is not to denoise, the element at 630 of denoising a block can be skipped and/or omitted for one or more blocks and/or frames. Alternatively, or in addition, motion estimation aided noise reduction encoding can include controlling an amount of noise reduction, which can be based on an estimate of the amount of noise, such as an estimate of noise variance or a probability density function.



FIG. 7 is a diagram of generating a denoised block (e.g., at 634) in accordance with one embodiment of this disclosure. Implementations of generating a denoised block can include identifying pixels at 700, identifying a coefficient at 710, applying a weight (e.g., weighting the coefficient) at 720, producing a denoised pixel at 730, evaluating the denoised pixel at 740, processing the pixels at 750, determining whether to produce another denoised pixel at 760, or a combination thereof.


More specifically, as an example, pixels for generating the denoised block can be identified at 700. Identifying the pixels can include identifying a current pixel (Pk) in a current block (e.g., as the block identified at 610) and/or identifying a noise reduction pixel (NRPk) in a noise reduction block (e.g., the noise reduction block identified at 632). Identifying the noise reduction pixel can be based on the current pixel. For example, the location of the current pixel in the current block may correspond with the location of the noise reduction pixel in the noise reduction block.


A coefficient (alpha′) can be identified at 710. The coefficient can be identified based on the current pixel and the noise reduction pixel. For example, the coefficient can be identified based on a function of the magnitude of the difference between the current pixel and the noise reduction pixel. In one implementation, the coefficient may be calculated as follows:

alpha′=1/(1+(|Pk−NRPk|)/8).  [Equation 3]


The coefficient can be weighted at 720. The coefficient can be weighted based on the motion vector. For example, the coefficient can be weighted based on a function of the coefficient and a magnitude of the motion vector (e.g., the motion vector identified at 620). In one implementation, the magnitude of the motion vector and a threshold can be used to indicate whether the current block includes noise. If noise is indicated in the current block, the coefficient can be increased. If noise is not indicated in the current block, the coefficient can be decreased. For example, in one implementation, the weighting can be expressed as:

alpha′ = (|MV|² < 2*T1) → alpha′ + alpha′/(3 + |MV|²/10);
alpha′ = (|MV|² ≥ 8*T1) → 0;
alpha′ ∈ [0, 1].  [Equation 4]


A denoised pixel Pk′ can be produced at 730. The denoised pixel can be produced based on a function of the current pixel Pk, the coefficient alpha′, the noise reduction pixel NRPk, or a combination thereof. In one implementation, a denoised pixel may be calculated (produced) as follows:

Pk′=alpha′*NRPk+(1−alpha′)*Pk.  [Equation 5]
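
Equations 3 through 5 can be combined into a single per-pixel helper; the sketch below mirrors their structure, with the squared motion vector magnitude and the threshold T1 passed in as assumed parameters rather than values taken from the patent.

    def denoise_pixel(pk, nrpk, mv_sq, t1):
        # Coefficient from the pixel difference (Equation 3), weighted by the
        # squared motion vector magnitude (Equation 4), then blended with the
        # noise reduction pixel (Equation 5).
        alpha = 1.0 / (1.0 + abs(pk - nrpk) / 8.0)
        if mv_sq < 2 * t1:
            alpha = alpha + alpha / (3.0 + mv_sq / 10.0)
        elif mv_sq >= 8 * t1:
            alpha = 0.0
        alpha = min(max(alpha, 0.0), 1.0)  # clamp to [0, 1]
        return alpha * nrpk + (1.0 - alpha) * pk

    print(denoise_pixel(pk=120, nrpk=124, mv_sq=0, t1=16))  # pulled toward the noise reduction pixel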


The denoised pixel Pk′ can be evaluated at 740. Evaluating the denoised pixel Pk′ can include determining whether to include the denoised pixel in the denoised block being generated, determining whether to include the denoised pixel in the noise reduction block being used for generating the denoised block, determining whether to include the current pixel in the denoised block being generated, determining whether to include the current pixel in the noise reduction block being used for generating the denoised block, or a combination thereof. In at least one implementation, the current pixels and/or denoised pixels can be included in a noise reduction block used for generating a next denoised block instead of or in addition to the noise reduction block being used for generating the denoised block.


The evaluation can be based on the denoised pixel and the current pixel. For example, the difference between the current pixel and the denoised pixel can be less than a threshold (T3) and can indicate a small change. In one example, if a small change is indicated, the denoised pixel Pk′ can be included in the denoised block being generated and the noise reduction block being used for generating the denoised block. In another example, if a small change is not indicated, the current pixel Pk can be included in the denoised block being generated and the noise reduction block being used for generating the denoised block. Alternatively or additionally, if a small change is not indicated, the current pixel Pk can be included in the denoised block being generated and the denoised pixel Pk′ can be included in the noise reduction block being used for generating the denoised block. In an implementation, the noise reduction block used for generating the denoised block can include pixels from the current block, denoised pixels, or both, and can be used for denoising a block in a future frame.


Evaluating the denoised pixel Pk′ can also include evaluating the current block. Evaluating the current block can include determining whether the sum of squared errors SSEr of the motion vector identified at 620 is less than a threshold (T4), which can indicate that the current block includes a shift in mean. For example, the SSEr can be less than or equal to the threshold T4 and the denoised pixel Pk′ can be included in the denoised block being generated and the noise reduction block being used for generating the denoised block. In another example, the SSEr can be greater than the threshold T4 and the current pixel Pk can be included in the denoised block being generated and the noise reduction block being used for generating the denoised block.


Although described separately, evaluating the difference between the current pixel and the denoised pixel, and evaluating the SSEr can be performed in combination. For example, the difference between the current pixel and the denoised pixel can be less than a threshold (T3) and the SSEr can be less than the threshold T4, and the denoised pixel Pk′ can be included in the denoised block being generated and the noise reduction block being used for generating the denoised block.


The pixels, such as the current pixel, the denoised pixel, the noise reduction pixel, or a combination thereof, can be processed at 750. Processing the pixels can include including the denoised pixel in the denoised block, including the current pixel in the denoised block, including the denoised pixel in the noise reduction block, including the current pixel in the noise reduction block, or a combination thereof. The denoised pixel, the current pixel, or a combination thereof can be included in the denoised block, the noise reduction block, or a combination thereof based on the evaluation above.


For example, in an implementation, the denoised pixel produced at 730 can be included in the denoised block being generated at 634 and the noise reduction block being used to generate the denoised block on a condition that the evaluation indicates a small difference between the current pixel and the denoised pixel, which can be expressed as (Pk−Pk′)² < T3, and a small SSEr, which can be expressed as SSEr < T4. Including the denoised pixel in the noise reduction block can include replacing the noise reduction pixel with the denoised pixel. In an implementation, the difference between the current pixel and the denoised pixel may be greater than a threshold or the SSEr may be greater than a threshold, and the current pixel can be included in the noise reduction block, the denoised block, or both. In another implementation, the difference between the current pixel and the denoised pixel may be greater than a threshold or the SSEr may be greater than a threshold, and the denoised pixel can be included in the noise reduction block.
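
One way to express the pixel selection policy described in this and the preceding paragraphs is sketched below; the thresholds and the specific fallback (using the current pixel in both the denoised block and the noise reduction block) are one of several options the description allows, chosen here only for illustration.

    def select_pixels(pk, pk_denoised, sse_r, t3, t4):
        # Return (pixel for the denoised block, pixel for the noise reduction block).
        small_change = (pk - pk_denoised) ** 2 < t3
        small_sse = sse_r < t4
        if small_change and small_sse:
            return pk_denoised, pk_denoised  # keep the denoised pixel in both
        return pk, pk                        # otherwise fall back to the current pixel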


Although described separately for clarity, and while the terms are not used interchangeably, the current block and the denoised block can be the same block. For example, including a denoised pixel in the denoised block can include replacing the current pixel with the denoised pixel in the current block. The noise reduction block can be included in a noise reduction frame (e.g., 510) and can be used for encoding another frame, such as a future frame, in the video sequence.


Other implementations of the diagram of generating a denoised block as shown in FIG. 7 are available. In implementations, additional elements of generating a denoised block can be added, certain elements can be combined, and/or certain elements can be removed. For example, in an implementation, evaluating the pixels as shown at 740 and processing the pixels as shown at 750 can be combined into a single element.


Although FIG. 7 describes an example of denoising using one noise reduction block for simplicity, implementations of this disclosure can include using any number of noise reduction blocks. For example, motion estimation aided noise reduction can include using multiple blocks from N previous frames, which may be denoised frames, as noise reduction blocks. FIG. 8 is a diagram of generating a denoised block using multiple noise reduction blocks in accordance with one embodiment of this disclosure. Motion estimation aided noise reduction can include identifying noise reduction blocks at 800, identifying taps at 810, applying weights (e.g., weighting the taps) at 820, producing a denoised pixel at 830, evaluating the denoised pixel at 840, processing the pixels at 850, determining whether to produce another denoised pixel at 860, or a combination thereof.


More specifically, as an example, noise reduction blocks can be identified at 800. Each noise reduction block (NRBi,j) can be associated with a block (j) of a frame (i). For example, a first noise reduction block NRB1,j can be associated with a block from a first frame, which may be a previous frame in the video sequence, and a second noise reduction block NRB2,j can be associated with a block from a second frame, which may be a previous frame in the video sequence. The location of the first noise reduction block in the first frame can correspond with the location of the second noise reduction block in the second frame. The noise reduction blocks can be denoised blocks. The noise reduction blocks can be unencoded blocks. Identifying the noise reduction block can include aligning each of the noise reduction blocks with the current block. Aligning the noise reduction blocks can include finding the noise reduction blocks that, according to the motion vectors, match the current block. In an implementation, aligning can include adjusting positions of the noise reduction blocks within a noise reduction frame based on the motion vectors. In an implementation, a noise reduction pixel (NRPi,j,k) can be a pixel (k) in a noise reduction block (j) in a frame (i).


Taps can be identified at 810. A tap (TAPi,j,k) can be associated with a pixel (k) of a block (j) of a frame (i) and can indicate a pixel (k) value weighting metric. For example, a tap TAP0,j,k can be associated with the current block of a current frame and can indicate a metric for weighting the current pixel Pk. A tap TAP1,j,k can be associated with the first noise reduction block NRB1,1 and can indicate a metric for weighting a noise reduction pixel in the first noise reduction block NRB1. A tap TAP2,j,k can be associated with the second noise reduction block NRB2 and can indicate a metric for weighting a noise reduction pixel in the second noise reduction block.


The taps can be weighted at 820. The taps can indicate equal pixel value weights. For example, the magnitude of the motion vector can be zero and the weighting metrics can be equal. In an implementation, the taps can be weighted based on proximity, such as in cases where the magnitude of the motion vector is greater than zero. For example, a first tap can be associated with a first noise reduction block. The first noise reduction block can be based on a first previous frame, which can precede the current frame in the video sequence. A second tap can be associated with a second noise reduction block. The second noise reduction block can be based on a second previous frame, which can precede the first previous frame in the video sequence. The magnitude of the motion vector can be non-zero and the first tap can be weighted more heavily than the second tap.


A denoised pixel Pk′ can be produced at 830. Producing the denoised pixel can include identifying a noise reduction pixel NRPi,j,k from each noise reduction block NRBi,j based on the location of the current pixel Pk in the current block of the current frame. Producing a denoised pixel Pk′ can include determining a weighted pixel value NRPi,j,k′ for each noise reduction pixel NRPi,j,k based on an associated tap, determining a weighted pixel value for the current pixel, and determining a sum of the weighted pixel values. For example, a first tap TAP1,1,k can be associated with the first noise reduction block NRB1,1 and can indicate a metric of ¼ for weighting a noise reduction pixel NRP1,1,1 in the first noise reduction block. For example, producing a denoised pixel using N taps may be expressed as:

Pk′ = TAP0,1,k*Pk + Σj=1..N (TAPi,j,k*NRPi,j,k).  [Equation 6]


Producing the denoised pixel can include determining the variance among the weighted pixel values. For example, the variance can be the difference between the sum of the weighted pixel values and the mean of the weighted pixel values. Determining the variance may be expressed as:

Var(k) = (Σj=1..N NRPi,j,k²) − (1/N · Σj=1..N NRPi,j,k)².  [Equation 7]
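
A multi-tap blend in the spirit of Equations 6 and 7 might be sketched as follows; the equal 1/(N+1) taps in the usage line and the use of the ordinary variance of the weighted contributions are assumptions made for this illustration.

    import numpy as np

    def denoise_pixel_multi(pk, noise_reduction_pixels, taps):
        # taps[0] weights the current pixel; taps[1:] weight the noise reduction
        # pixels (Equation 6). The variance of the weighted noise reduction
        # contributions (cf. Equation 7) lets the caller reject unstable results.
        weighted = np.asarray(taps[1:]) * np.asarray(noise_reduction_pixels, dtype=float)
        denoised = taps[0] * pk + weighted.sum()
        variance = float(np.var(weighted))
        return denoised, variance

    # One current pixel, three noise reduction pixels, equal taps of 1/4 each.
    value, var = denoise_pixel_multi(120, [118, 122, 121], [0.25, 0.25, 0.25, 0.25])
    print(round(value, 2), round(var, 4))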


The denoised pixel can be evaluated at 840. The evaluation can be based on the denoised pixel and the current pixel. For example, the difference between the current pixel and the denoised pixel can be less than a threshold and can indicate a small change. The evaluation can include determining whether the variance is less than a threshold. For example, the variance can be less than a threshold and can indicate a small variance.


The pixels can be processed at 850 based on the indication of small change and/or the indication of small variance. Processing the pixels can include including the denoised pixel in the denoised block, including the current pixel in the denoised block, or a combination thereof. For example, the evaluation can indicate a small change and a small variance, and the denoised pixel can be included in the denoised block. Processing the pixels can include including the denoised pixel in a noise reduction block, including the current pixel in a noise reduction block, or a combination thereof. For example, the denoising can include using four noise reduction blocks (e.g., identified at 800), and the denoised pixel can be included in the first noise reduction block.


Other implementations of the diagram of generating a denoised block using multiple noise reduction blocks as shown in FIG. 8 are available. In implementations, additional elements of generating a denoised block using multiple noise reduction blocks can be added, certain elements can be combined, and/or certain elements can be removed. For example, in an implementation, generating a denoised block using multiple noise reduction blocks can include combining in one element the evaluation of the pixels (as shown at 840) and the processing of the pixels (as shown at 850).


Motion estimation aided noise reduction encoding, or any portion thereof, can be implemented in a device, such as the transmitting station 12 shown in FIG. 1. For example, an encoder, such as the encoder 70 shown in FIG. 3, can implement motion estimation aided noise reduction encoding, or any portion thereof, using instructions stored on a tangible, non-transitory, computer-readable medium, such as the memory 16 shown in FIG. 1.


The words “example” or “exemplary” are used herein to mean serving as an example, instance, or illustration. Any aspect or design described herein as “example” or “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects or designs. Rather, use of the words “example” or “exemplary” is intended to present concepts in a concrete fashion. As used in this application, the term “or” is intended to mean an inclusive “or” rather than an exclusive “or”. That is, unless specified otherwise, or clear from context, “X includes A or B” is intended to mean any of the natural inclusive permutations. That is, if X includes A; X includes B; or X includes both A and B, then “X includes A or B” is satisfied under any of the foregoing instances. In addition, the articles “a” and “an” as used in this application and the appended claims should generally be construed to mean “one or more” unless specified otherwise or clear from context to be directed to a singular form. Moreover, use of the term “an embodiment” or “one embodiment” or “an implementation” or “one implementation” throughout is not intended to mean the same embodiment or implementation unless described as such. As used herein, the terms “determine” and “identify”, or any variations thereof, include selecting, ascertaining, computing, looking up, receiving, determining, establishing, obtaining, or otherwise identifying or determining in any manner whatsoever using one or more of the devices shown in FIG. 1.


Further, for simplicity of explanation, although the figures and descriptions herein may include sequences or series of steps or stages, elements of the methods disclosed herein can occur in various orders and/or concurrently. Additionally, elements of the methods disclosed herein may occur with other elements not explicitly presented and described herein. Furthermore, not all elements of the methods described herein may be required to implement a method in accordance with the disclosed subject matter.


The embodiments of encoding and decoding described above illustrate some exemplary encoding and decoding techniques. However, it is to be understood that encoding and decoding, as those terms are used in the claims, could mean compression, decompression, transformation, or any other processing or change of data.


The embodiments of the transmitting station 12 and/or the receiving station 30 (and the algorithms, methods, instructions, etc. stored thereon and/or executed thereby) can be realized in hardware, software, or any combination thereof. The hardware can include, for example, computers, intellectual property (IP) cores, application-specific integrated circuits (ASICs), programmable logic arrays, optical processors, programmable logic controllers, microcode, microcontrollers, servers, microprocessors, digital signal processors or any other suitable circuit. In the claims, the term “processor” should be understood as encompassing any of the foregoing hardware, either singly or in combination. The terms “signal” and “data” are used interchangeably. Further, portions of the transmitting station 12 and the receiving station 30 do not necessarily have to be implemented in the same manner.


Further, in one embodiment, for example, the transmitting station 12 or the receiving station 30 can be implemented using a general purpose computer or general purpose/processor with a computer program that, when executed, carries out any of the respective methods, algorithms and/or instructions described herein. In addition or alternatively, for example, a special purpose computer/processor can be utilized which can contain specialized hardware for carrying out any of the methods, algorithms, or instructions described herein.


The transmitting station 12 and receiving station 30 can, for example, be implemented on computers in a real-time video system. Alternatively, the transmitting station 12 can be implemented on a server and the receiving station 30 can be implemented on a device separate from the server, such as a hand-held communications device. In this instance, the transmitting station 12 can encode content using an encoder 70 into an encoded video signal and transmit the encoded video signal to the communications device. In turn, the communications device can then decode the encoded video signal using a decoder 100. Alternatively, the communications device can decode content stored locally on the communications device, for example, content that was not transmitted by the transmitting station 12. Other suitable transmitting station 12 and receiving station 30 implementation schemes are available. For example, the receiving station 30 can be a generally stationary personal computer rather than a portable communications device and/or a device including an encoder 70 may also include a decoder 100.


Further, all or a portion of embodiments of the present invention can take the form of a computer program product accessible from, for example, a tangible computer-usable or computer-readable medium. A computer-usable or computer-readable medium can be any device that can, for example, tangibly contain, store, communicate, or transport the program for use by or in connection with any processor. The medium can be, for example, an electronic, magnetic, optical, electromagnetic, or a semiconductor device. Other suitable mediums are also available.


The above-described embodiments have been described in order to allow easy understanding of the present invention and do not limit the present invention. On the contrary, the invention is intended to cover various modifications and equivalent arrangements included within the scope of the appended claims, which scope is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structure as is permitted under the law.

Claims
  • 1. A method for encoding a frame in a video stream, the frame having a plurality of blocks, the method comprising: identifying a noise reduction frame;identifying a first motion vector for encoding a current block of the plurality of blocks;
  • 2. The method of claim 1, wherein identifying the noise reduction block includes using the first motion vector to identify the noise reduction block that corresponds with the current block on a condition that a magnitude of the first motion vector is greater than a first threshold.
  • 3. The method of claim 1, wherein identifying the noise reduction block includes using the first motion vector to identify the noise reduction block that corresponds with the current block on a condition that a difference between a sum of squared errors of the first motion vector and a sum of squared errors of a zero magnitude motion vector is greater than a second threshold.
  • 4. The method of claim 1, wherein: the current block includes a plurality of current pixels;the noise reduction block includes a plurality of noise reduction pixels; andgenerating the denoised block includes: identifying a filter coefficient based on a current pixel from the plurality of current pixels and a noise reduction pixel from the plurality of noise reduction pixels,producing a denoised pixel based on the filter coefficient, the current pixel, the noise reduction pixel, andincluding the denoised pixel in the denoised block.
  • 5. The method of claim 1, wherein identifying the second motion vector includes using the first motion vector as the second motion vector.
  • 6. The method of claim 1, wherein: identifying the noise reduction block includes identifying a plurality of noise reduction blocks from a plurality of noise reduction frames that includes the noise reduction frame, wherein a first noise reduction block in the plurality of noise reduction blocks is associated with the reference block and a second noise reduction block in the plurality of noise reduction blocks is associated with another reference block.
  • 7. The method of claim 6, wherein: the current block includes a plurality of current pixels;the first noise reduction block includes a first plurality of noise reduction pixels;the second noise reduction block includes a second plurality of noise reduction pixels; andgenerating the denoised block includes: identifying a first filter coefficient based on a current pixel from the plurality of current pixels;identifying a first noise reduction pixel from the first plurality of noise reduction pixels;identifying a second noise reduction pixel from the second plurality of noise reduction pixels;producing a denoised pixel based on the filter coefficient, the current pixel, the first noise reduction pixel, and the second noise reduction pixel; andincluding the denoised pixel in the denoised block.
  • 8. The method of claim 7, wherein: generating the denoised block includes generating a variance based on the first noise reduction pixel and the second noise reduction pixel; andincluding the denoised pixel includes using the current pixel as the denoised pixel on a condition that a difference between the current pixel and the denoised pixel is greater than a first threshold and the variance is greater than a second threshold.
  • 9. An apparatus for encoding a frame in a video stream, the frame having a plurality of blocks, the apparatus comprising: a memory; anda processor configured to execute instructions stored in the memory to: identify a noise reduction frame;identify a first motion vector for encoding a current block of the plurality of blocks, wherein the first motion vector indicates a reference block in a reference frame, and wherein the noise reduction frame differs from the reference frame;identify a noise reduction block from the noise reduction frame such that the noise reduction block is collocated with the reference block;generate a denoised block using the current block and the noise reduction block;identify a second motion vector for encoding the denoised block based on the first motion vector;generate a residual block using the denoised block and the second motion vector; andgenerate an encoded block based on the residual block and the second motion vector; andwherein the current block includes a plurality of current pixels;the noise reduction block includes a plurality of noise reduction pixels; andthe processor is configured to generate the denoised block by: identifying a filter coefficient based on a current pixel from the plurality of current pixels and a noise reduction pixel from the plurality of noise reduction pixels;producing a denoised pixel based on the filter coefficient, the current pixel, and the noise reduction pixel;on a condition that a difference between the current pixel and the denoised pixel is less than a first threshold and a sum of squared errors value is less than a second threshold, including the denoised pixel in the denoised block;on a condition that the difference between the current pixel and the denoised pixel is greater than the first threshold, including the current pixel in the denoised block; andon a condition that the difference between the current pixel and the denoised pixel is equal to the first threshold, including the current pixel in the denoised block; andwherein the processor is configured to generate the denoised block by weighting the filter coefficient based on a weighting metric, wherein:on a condition that a magnitude of the first motion vector is greater than a third threshold and a difference between a sum of squared errors of the first motion vector and a sum of squared errors of a zero magnitude motion vector is greater than a fourth threshold, the weighting metric is zero; andon a condition that a magnitude of the first motion vector is less than the third threshold and a difference between a sum of squared errors of the first motion vector and a sum of squared errors of a zero magnitude motion vector is less than the fourth threshold, the weighting metric is the magnitude of the first motion vector.
  • 10. The apparatus of claim 9, wherein the processor is configured to identify the noise reduction block by using the first motion vector to identify the noise reduction block that corresponds with the current block on a condition a magnitude of the first motion vector is less than a first threshold.
  • 11. The apparatus of claim 9, wherein the processor is configured to identify the noise reduction block by using the first motion vector to identify the noise reduction block that corresponds with the current block on a condition that a difference between a sum of squared errors of the first motion vector and a sum of squared errors of a zero magnitude motion vector is greater than a second threshold.
  • 12. The apparatus of claim 9, wherein: the current block includes a plurality of current pixels;the noise reduction block includes a plurality of noise reduction pixels; andthe processor is configured to generate the denoised block by: identifying a filter coefficient based on a current pixel from the plurality of current pixels and a noise reduction pixel from the plurality of noise reduction pixels;producing a denoised pixel based on the filter coefficient, the current pixel, and the noise reduction pixel; andincluding the denoised pixel in the denoised block.
  • 13. The apparatus of claim 9, wherein the processor is configured to execute instructions stored in the memory to: determine whether an intra prediction mode is indicated by the first motion vector; and identify a zero magnitude motion vector as the second motion vector if the intra prediction mode is indicated.
  • 14. The apparatus of claim 9, wherein the processor is configured to identify the noise reduction block by identifying a plurality of noise reduction blocks from a plurality of noise reduction frames that includes the noise reduction frame, wherein a first noise reduction block in the plurality of noise reduction blocks is associated with the reference block and a second noise reduction block in the plurality of noise reduction blocks is associated with another reference block.
  • 15. The method of claim 1, wherein the current block includes a plurality of current pixels, the noise reduction block includes a plurality of noise reduction pixels, and wherein generating the denoised block includes producing a denoised pixel based on a current pixel from the plurality of current pixels and a noise reduction pixel from the plurality of noise reduction pixels, the method further comprising: on a condition that a difference between the current pixel and the denoised pixel is within an identified threshold, including the denoised pixel in the noise reduction block; and on a condition that a difference between the current pixel and the denoised pixel exceeds the identified threshold, including the current pixel in the noise reduction block.
  • 16. The method of claim 1, wherein the frame is a first frame from a sequence of frames from the video stream, and wherein generating the denoised block includes generating an updated noise reduction block based on the noise reduction block and the current block, the method further comprising: identifying a second frame from the sequence of frames, wherein the first frame precedes the second frame in the sequence of frames; and generating a second denoised block using a block from the second frame and the updated noise reduction block.
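The per-pixel filtering recited in claims 9, 12, and 15 amounts to: derive a filter coefficient from the current pixel and the collocated noise reduction pixel, blend the two pixels, and keep the blended value only when it stays close to the source pixel and the block-level error is small. The Python sketch below illustrates that decision logic only; the coefficient formula, the threshold values, and all identifiers (weighting_metric, denoise_block, and so on) are hypothetical and are not taken from the claims.

```python
import numpy as np

def weighting_metric(mv, sse_mv, sse_zero, t3, t4):
    # Weighting of the filter coefficient per claim 9: suppress filtering when
    # the first motion vector is long and its error gap to the zero vector is
    # large; for a short vector with a small error gap, use the magnitude.
    magnitude = float(np.hypot(mv[0], mv[1]))
    if magnitude > t3 and (sse_mv - sse_zero) > t4:
        return 0.0
    if magnitude < t3 and (sse_mv - sse_zero) < t4:
        return magnitude
    return 1.0  # remaining cases are left unspecified by the claim

def denoise_block(current, noise_reduction, weight, t1, t2):
    # Blend each current pixel with its collocated noise reduction pixel and
    # decide, pixel by pixel, whether to keep the blended (denoised) value.
    current = current.astype(np.float64)
    noise_reduction = noise_reduction.astype(np.float64)
    sse = float(np.sum((current - noise_reduction) ** 2))
    denoised = current.copy()
    for idx in np.ndindex(current.shape):
        cur = current[idx]
        ref = noise_reduction[idx]
        # Hypothetical coefficient: filter harder when the pixels already agree.
        coeff = weight * max(0.0, 1.0 - abs(cur - ref) / 32.0)
        candidate = (1.0 - coeff) * cur + coeff * ref
        # Claims 9 and 15: accept the denoised pixel only when it stays within
        # the first threshold of the source pixel and the block error is below
        # the second threshold; otherwise fall back to the original pixel.
        if abs(cur - candidate) < t1 and sse < t2:
            denoised[idx] = candidate
    return denoised
```

In this sketch, weight would be obtained by applying weighting_metric to the first motion vector identified for the current block and its sum-of-squared-errors values; the linear falloff used for the coefficient is purely illustrative, since the claims do not specify how the coefficient is computed.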
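Claims 15 and 16 further describe carrying the filtered result forward: the block used for denoising becomes the updated noise reduction block, so the next frame in the sequence is filtered against the updated buffer. A minimal sketch of that per-block loop, reusing the hypothetical denoise_block above and assuming single-channel frames, a fixed block size, and illustrative threshold constants:

```python
def denoise_sequence(frames, block_size=16):
    # Running noise reduction buffer, initialized from the first frame.
    noise_reduction_frame = frames[0].astype(np.float64)
    denoised_frames = []
    for frame in frames:
        denoised = frame.astype(np.float64)
        height, width = frame.shape
        for y in range(0, height, block_size):
            for x in range(0, width, block_size):
                cur = frame[y:y + block_size, x:x + block_size]
                nr = noise_reduction_frame[y:y + block_size, x:x + block_size]
                block = denoise_block(cur, nr, weight=1.0, t1=8.0, t2=4096.0)
                denoised[y:y + block_size, x:x + block_size] = block
                # Claims 15/16: the updated noise reduction block is what the
                # collocated block of the next frame is filtered against.
                noise_reduction_frame[y:y + block_size, x:x + block_size] = block
        denoised_frames.append(denoised)
        # Motion estimation and encoding of each denoised frame would follow here.
    return denoised_frames
```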
US Referenced Citations (269)
Number Name Date Kind
3825832 Frei et al. Jul 1974 A
4719642 Lucas Jan 1988 A
4729127 Chan et al. Mar 1988 A
4736446 Reynolds et al. Apr 1988 A
4816906 Kummerfeldt et al. Mar 1989 A
4868764 Richards Sep 1989 A
4891748 Mann Jan 1990 A
4924310 von Brandt May 1990 A
5068724 Krause et al. Nov 1991 A
5083214 Knowles Jan 1992 A
5091782 Krause et al. Feb 1992 A
5136371 Savatier et al. Aug 1992 A
5136376 Yagasaki et al. Aug 1992 A
5148269 de Haan et al. Sep 1992 A
5164819 Music Nov 1992 A
5270812 Richards Dec 1993 A
5274442 Murakami et al. Dec 1993 A
5278647 Hingorani et al. Jan 1994 A
5313306 Kuban et al. May 1994 A
5337086 Fujinami Aug 1994 A
5341440 Earl et al. Aug 1994 A
5361105 Iu Nov 1994 A
5365280 De Haan et al. Nov 1994 A
5377018 Rafferty Dec 1994 A
5398068 Liu et al. Mar 1995 A
5432870 Schwartz Jul 1995 A
5457780 Shaw et al. Oct 1995 A
5461423 Tsukagoshi Oct 1995 A
5488570 Agarwal Jan 1996 A
5512952 Iwamura Apr 1996 A
5561477 Polit Oct 1996 A
5568200 Pearlstein et al. Oct 1996 A
5576767 Lee et al. Nov 1996 A
5579348 Walker et al. Nov 1996 A
5589945 Abecassis Dec 1996 A
5623308 Civanlar et al. Apr 1997 A
5629736 Haskell et al. May 1997 A
5640208 Fujinami Jun 1997 A
5659539 Porter et al. Aug 1997 A
5686962 Chung et al. Nov 1997 A
5689306 Jung Nov 1997 A
5696869 Abecassis Dec 1997 A
5717791 Labaere et al. Feb 1998 A
5721822 Agarwal Feb 1998 A
5731840 Kikuchi et al. Mar 1998 A
5734744 Wittenstein et al. Mar 1998 A
5737020 Hall et al. Apr 1998 A
5748242 Podilchuk May 1998 A
5748247 Hu May 1998 A
5767909 Jung Jun 1998 A
5774593 Zick et al. Jun 1998 A
5793647 Hageniers et al. Aug 1998 A
5812197 Chan et al. Sep 1998 A
5818536 Morris et al. Oct 1998 A
5818969 Astle Oct 1998 A
5828370 Moeller et al. Oct 1998 A
5886742 Hibi et al. Mar 1999 A
5903264 Moeller et al. May 1999 A
5912676 Malladi et al. Jun 1999 A
5929940 Jeannin Jul 1999 A
5930493 Ottesen et al. Jul 1999 A
5946414 Cass et al. Aug 1999 A
5959672 Sasaki Sep 1999 A
5963203 Goldberg et al. Oct 1999 A
5969777 Mawatari Oct 1999 A
5985526 Tutt et al. Nov 1999 A
5987866 Weeger et al. Nov 1999 A
5991447 Eifrig et al. Nov 1999 A
5999641 Miller et al. Dec 1999 A
6005980 Eifrig et al. Dec 1999 A
6014706 Cannon et al. Jan 2000 A
6041145 Hayashi et al. Mar 2000 A
6061397 Ogura May 2000 A
6084908 Chiang et al. Jul 2000 A
6097842 Suzuki et al. Aug 2000 A
6100940 Dieterich Aug 2000 A
6108383 Miller et al. Aug 2000 A
6112234 Leiper Aug 2000 A
6115501 Chun et al. Sep 2000 A
6119154 Weaver et al. Sep 2000 A
6125144 Matsumura et al. Sep 2000 A
6141381 Sugiyama Oct 2000 A
6167164 Lee Dec 2000 A
6181822 Miller et al. Jan 2001 B1
6185363 Dimitrova et al. Feb 2001 B1
6188799 Tan et al. Feb 2001 B1
6233279 Boon May 2001 B1
6240135 Kim May 2001 B1
6272179 Kadono Aug 2001 B1
6277075 Torp et al. Aug 2001 B1
6285801 Mancuso et al. Sep 2001 B1
6289049 Kim et al. Sep 2001 B1
6292837 Miller et al. Sep 2001 B1
6327304 Miller et al. Dec 2001 B1
6359929 Boon Mar 2002 B1
6370267 Miller et al. Apr 2002 B1
6381277 Chun et al. Apr 2002 B1
6400763 Wee Jun 2002 B1
6414995 Okumura et al. Jul 2002 B2
6418166 Wu et al. Jul 2002 B1
6434197 Wang et al. Aug 2002 B1
6473463 Agarwal Oct 2002 B2
6522784 Zlotnick Feb 2003 B1
6529638 Westerman Mar 2003 B1
6535555 Bordes et al. Mar 2003 B1
6560366 Wilkins May 2003 B1
6621867 Sazzad et al. Sep 2003 B1
6661842 Abousleman Dec 2003 B1
6687303 Ishihara Feb 2004 B1
6694342 Mou Feb 2004 B1
6697061 Wee et al. Feb 2004 B1
6707952 Tan et al. Mar 2004 B1
6711211 Lainema Mar 2004 B1
6735249 Karczewicz et al. May 2004 B1
6765964 Conklin Jul 2004 B1
6798837 Uenoyama et al. Sep 2004 B1
6807317 Mathew et al. Oct 2004 B2
6826229 Kawashima et al. Nov 2004 B2
6904091 Schelkens et al. Jun 2005 B1
6904096 Kobayashi et al. Jun 2005 B2
6907079 Gomila et al. Jun 2005 B2
6934419 Zlotnick Aug 2005 B2
6985526 Bottreau et al. Jan 2006 B2
6985527 Gunter et al. Jan 2006 B2
6987866 Hu Jan 2006 B2
7027654 Ameres et al. Apr 2006 B1
7031546 Maeda et al. Apr 2006 B2
7054367 Oguz et al. May 2006 B2
7088351 Wang Aug 2006 B2
7116831 Mukerjee et al. Oct 2006 B2
7120197 Lin et al. Oct 2006 B2
7170937 Zhou Jan 2007 B2
7194036 Melanson Mar 2007 B1
7226150 Yoshimura et al. Jun 2007 B2
7236524 Sun et al. Jun 2007 B2
7277592 Lin Oct 2007 B1
7330509 Lu et al. Feb 2008 B2
7358881 Melanson Apr 2008 B2
7447337 Zhang et al. Nov 2008 B2
7492823 Lee et al. Feb 2009 B2
7499492 Ameres et al. Mar 2009 B1
7590179 Mukerjee Sep 2009 B2
7606310 Ameres et al. Oct 2009 B1
7620103 Cote et al. Nov 2009 B2
7627040 Woods et al. Dec 2009 B2
7657098 Lin et al. Feb 2010 B2
7751514 Tsuie et al. Jul 2010 B2
7885476 Zhang Feb 2011 B2
7916783 Gao et al. Mar 2011 B2
8045813 Lee et al. Oct 2011 B2
8121196 Katsavounidis et al. Feb 2012 B2
8200028 Gabso et al. Jun 2012 B2
8218629 Lee et al. Jul 2012 B2
8259819 Liu et al. Sep 2012 B2
8295367 Tang et al. Oct 2012 B2
8325805 Cho et al. Dec 2012 B2
8780971 Bankoski et al. Jul 2014 B1
8885706 Bankoski et al. Nov 2014 B2
20010022815 Agarwal Sep 2001 A1
20020031272 Bagni et al. Mar 2002 A1
20020036705 Lee et al. Mar 2002 A1
20020064228 Sethuraman et al. May 2002 A1
20020094130 Bruls et al. Jul 2002 A1
20020168114 Valente Nov 2002 A1
20020172431 Atkins et al. Nov 2002 A1
20030023982 Lee et al. Jan 2003 A1
20030039310 Wu et al. Feb 2003 A1
20030053708 Kryukov et al. Mar 2003 A1
20030053711 Kim Mar 2003 A1
20030081850 Karczewicz et al. May 2003 A1
20030142753 Gunday Jul 2003 A1
20030165331 Van Der Schaar Sep 2003 A1
20030189982 MacInnis Oct 2003 A1
20030194009 Srinivasan Oct 2003 A1
20030215014 Koto et al. Nov 2003 A1
20040013308 Jeon et al. Jan 2004 A1
20040017939 Mehrotra Jan 2004 A1
20040042549 Huang et al. Mar 2004 A1
20040047416 Tomita Mar 2004 A1
20040062307 Hallapuro et al. Apr 2004 A1
20040080669 Nagai et al. Apr 2004 A1
20040120400 Linzer Jun 2004 A1
20040179610 Lu et al. Sep 2004 A1
20040181564 MacInnis et al. Sep 2004 A1
20040184533 Wang Sep 2004 A1
20040228410 Ameres et al. Nov 2004 A1
20040240556 Winger et al. Dec 2004 A1
20050013358 Song et al. Jan 2005 A1
20050013494 Srinivasan et al. Jan 2005 A1
20050053294 Mukerjee et al. Mar 2005 A1
20050117653 Sankaran Jun 2005 A1
20050135699 Anderson Jun 2005 A1
20050147165 Yoo et al. Jul 2005 A1
20050169374 Marpe et al. Aug 2005 A1
20050196063 Guangxi et al. Sep 2005 A1
20050265447 Park Dec 2005 A1
20050276323 Martemyanov et al. Dec 2005 A1
20050276327 Lee et al. Dec 2005 A1
20050286629 Dumitras et al. Dec 2005 A1
20060013315 Song Jan 2006 A1
20060062311 Sun et al. Mar 2006 A1
20060093038 Boyce May 2006 A1
20060098737 Sethuraman et al. May 2006 A1
20060098738 Cosman et al. May 2006 A1
20060126962 Sun Jun 2006 A1
20060153301 Guleryuz Jul 2006 A1
20060182181 Lee et al. Aug 2006 A1
20060215758 Kawashima Sep 2006 A1
20060239345 Taubman et al. Oct 2006 A1
20060268990 Lin et al. Nov 2006 A1
20070009044 Tourapis et al. Jan 2007 A1
20070009171 Nakashizuka et al. Jan 2007 A1
20070025448 Cha et al. Feb 2007 A1
20070047648 Tourapis et al. Mar 2007 A1
20070081593 Jeong et al. Apr 2007 A1
20070098067 Kim et al. May 2007 A1
20070110152 Lee et al. May 2007 A1
20070140338 Bhaskaran et al. Jun 2007 A1
20070140342 Karczewicz et al. Jun 2007 A1
20070153899 Koto et al. Jul 2007 A1
20070171988 Panda et al. Jul 2007 A1
20070177673 Yang Aug 2007 A1
20070189735 Kawashima et al. Aug 2007 A1
20070201559 He Aug 2007 A1
20070230572 Koto et al. Oct 2007 A1
20070237241 Ha et al. Oct 2007 A1
20070253483 Lee et al. Nov 2007 A1
20070253490 Makino Nov 2007 A1
20070253491 Ito et al. Nov 2007 A1
20070274385 He Nov 2007 A1
20070274388 Lee et al. Nov 2007 A1
20080025398 Molloy et al. Jan 2008 A1
20080025411 Chen et al. Jan 2008 A1
20080080615 Tourapis et al. Apr 2008 A1
20080101469 Ishtiaq et al. May 2008 A1
20080130755 Loukas et al. Jun 2008 A1
20080159649 Kempf et al. Jul 2008 A1
20080170629 Shim et al. Jul 2008 A1
20080198931 Chappalli et al. Aug 2008 A1
20080212678 Booth et al. Sep 2008 A1
20080219351 Kim et al. Sep 2008 A1
20080253457 Moore Oct 2008 A1
20080279279 Liu et al. Nov 2008 A1
20080298472 Jain et al. Dec 2008 A1
20090003440 Karczewicz et al. Jan 2009 A1
20090003717 Sekiguchi et al. Jan 2009 A1
20090034617 Tanaka Feb 2009 A1
20090161770 Dong et al. Jun 2009 A1
20090185058 Vakrat et al. Jul 2009 A1
20090190660 Kusakabe et al. Jul 2009 A1
20090196351 Cho et al. Aug 2009 A1
20090287493 Janssen et al. Nov 2009 A1
20090316793 Yang et al. Dec 2009 A1
20100022815 Chikamatsu et al. Jan 2010 A1
20100027906 Hara et al. Feb 2010 A1
20100208944 Fukunishi Aug 2010 A1
20110007799 Karczewicz et al. Jan 2011 A1
20110116549 Sun May 2011 A1
20110141237 Cheng et al. Jun 2011 A1
20110228843 Narroschke et al. Sep 2011 A1
20110229029 Kass Sep 2011 A1
20110268182 Joshi Nov 2011 A1
20120008870 Nguyen et al. Jan 2012 A1
20120039383 Huang et al. Feb 2012 A1
20120063513 Grange et al. Mar 2012 A1
20120081566 Cote et al. Apr 2012 A1
20120081580 Cote et al. Apr 2012 A1
20120082580 Tsuboi et al. Apr 2012 A1
20130114679 Wilkins et al. May 2013 A1
Foreign Referenced Citations (31)
Number Date Country
0634873 Sep 1998 EP
1351510 Oct 2003 EP
1365590 Nov 2003 EP
1511319 Mar 2005 EP
1555832 Jul 2005 EP
1564997 Aug 2005 EP
1838108 Sep 2007 EP
1840875 Oct 2007 EP
2076045 Jul 2009 EP
61092073 May 1986 JP
2217088 Aug 1990 JP
06038197 Feb 1994 JP
8280032 Oct 1996 JP
09179987 Jul 1997 JP
11262018 Sep 1999 JP
11289544 Oct 1999 JP
11313332 Nov 1999 JP
11513205 Nov 1999 JP
2005503737 Feb 2005 JP
2005308623 Nov 2005 JP
100213018 Aug 1999 KR
200130916 Apr 2001 KR
WO0150770 Jul 2001 WO
WO02089487 Nov 2002 WO
WO03026315 Mar 2003 WO
WO2006602377 Jun 2006 WO
WO2006083614 Aug 2006 WO
WO2007052303 May 2007 WO
WO2008005124 Jan 2008 WO
WO2010077325 Jul 2010 WO
WO2012123855 Sep 2012 WO
Non-Patent Literature Citations (66)
Entry
Mahmoudi, Mona et al.; “Fast Image and Video Denoising via Nonlocal Means of Similar Neighborhoods”, IEEE Signal Processing Letters, vol. 12, No. 12, Dec. 2005.
“Recent Trends in Denoising Tutorial: Selected Publications”; 2007 IEEE International Symposium on Information Theory (ISIT2007). http://www.stanford.edu/~slansel/tutorial/publications.htm. Feb. 2012.
“Video denoising” http://en.wikipedia.org/wiki/Video_denoising.com. Feb. 2012.
Nokia, Inc., Nokia Research Center, “MVC Decoder Description”, Telecommunication Standardization Sector, Study Period 1997-2000, Geneva, Feb. 7, 2000, 99 pp.
Stiller, Christoph; “Motion-Estimation for Coding of Moving Video at 8 kbit/s with Gibbs Modeled Vectorfield Smoothing”, SPIE vol. 1360 Visual Communications and Image Processing 1990, 9 pp.
Chen, Xing C., et al.; “Quadtree Based Adaptive Lossy Coding of Motion Vectors”, IEEE 1996, 4 pp.
Wright, R. Glenn, et al.; “Multimedia—Electronic Technical Manual for ATE”, IEEE 1996, 3 pp.
Schiller, H., et al.; “Efficient Coding of Side Information in a Low Bitrate Hybrid Image Coder”, Signal Processing 19 (1990) Elsevier Science Publishers B.V. 61-73, 13 pp.
Strobach, Peter; “Tree-Structured Scene Adaptive Coder”, IEEE Transactions on Communications, vol. 38, No. 4, Apr. 1990, 10 pp.
Steliaros, Michael K., et al.; “Locally-accurate motion estimation for object-based video coding”, SPIE vol. 3309, 1997, 11 pp.
Martin, Graham R., et al.; “Reduced Entropy Motion Compensation Using Variable Sized Blocks”, SPIE vol. 3024, 1997, 10 pp.
Schuster, Guido M., et al.; “A Video Compression Scheme With Optimal Bit Allocation Among Segmentation, Motion, and Residual Error”, IEEE Transactions on Image Processing, vol. 6, No. 11, Nov. 1997, 16 pp.
Liu, Bede, et al.; “New Fast Algorithms for the Estimation of Block Motion Vectors”, IEEE Transactions on Circuits and Systems for Video Technology, vol. 3, No. 2, Apr. 1993, 10 pp.
Kim, Jong Won, et al.; “On the Hierarchical Variable Block Size Motion Estimation Technique for Motion Sequence Coding”, SPIE Visual Communication and Image Processing 1993, Cambridge, MA, Nov. 8, 1993, 29 pp.
Guillotel, Philippe, et al.; “Comparison of motion vector coding techniques”, SPIE vol. 2308, 1994, 11 pp.
Orchard, Michael T.; “Exploiting Scene Structure in Video Coding”, IEEE 1991, 5 pp.
Liu, Bede, et al.; “A simple method to segment motion field for video coding”, SPIE vol. 1818, Visual Communications and Image Processing 1992, 10 pp.
Ebrahimi, Touradj, et al.; “Joint motion estimation and segmentation for very low bitrate video coding”, SPIE vol. 2501, 1995, 12 pp.
Karczewicz, Marta, et al.; “Video Coding Using Motion Compensation With Polynomial Motion Vector Fields”, IEEE COMSOC EURASIP, First International Workshop on Wireless Image/Video Communications—Sep. 1996, 6 pp.
Wiegand, Thomas, et al.; “Rate-Distortion Optimized Mode Selection for Very Low Bit Rate Video Coding and the Emerging H.263 Standard”, IEEE Transactions on Circuits and Systems for Video Technology, vol. 6, No. 2, Apr. 1996, 9 pp.
Wiegand, Thomas, et al.; “Long-Term Memory Motion-Compensated Prediction”, Publication Unknown, Date Unknown, 15 pp.
Zhang, Kui, et al.; “Variable Block Size Video Coding With Motion Prediction and Motion Segmentation”, SPIE vol. 2419, 1995, 9 pp.
Chen, Michael C., et al.; “Design and Optimization of a Differentially Coded Variable Block Size Motion Compensation System”, IEEE 1996, 4 pp.
Orchard, Michael T.; “Predictive Motion-Field Segmentation for Image Sequence Coding”, IEEE Transactions on Circuits and Systems for Video Technology, vol. 3, No. 1, Feb. 1993, 17 pp.
Nicolas, H., et al.; “Region-based motion estimation using deterministic relaxation schemes for image sequence coding”, IEEE 1992, 4 pp.
Luttrell, Max, et al.; “Simulation Results for Modified Error Resilient Syntax With Data Partitioning and RVLC”, ITU-Telecommunications Standardization Sector, Study Group 16, Video Coding Experts Group (Question 15), Sixth Meeting: Seoul, South Korea, Nov. 2, 1998, 34 pp.
A High Efficient Method for Parallelizing Reconstructor & Loop Deblocking Filter on Multi-core Processor Platform.
An Optimized In-Loop H.264 De-Blocking Filter on Multi-Core Engines.
Architectures for Efficient Partitioning of Video Coding Algorithms—H. 264 decoder.
Bankoski et al. “Technical Overview of VP8, an Open Source Video Codec for the Web”. Dated Jul. 11, 2011.
Bankoski et al. “VP8 Data Format and Decoding Guide” Independent Submission. RFC 6389, Dated Nov. 2011.
Bankoski et al. “VP8 Data Format and Decoding Guide; draft-bankoski-vp8-bitstream-02” Network Working Group. Internet-Draft, May 18, 2011, 288 pp.
Dai, JingJing et al., “Film Grain Noise Removal and Synthesis in Video Coding”, 2010 IEEE, pp. 890-893.
Hsiang, Antialiasing Spatial Scalable Subband/Wavelet Coding Using H.264/AVC, 4 pages.
Implementors' Guide; Series H: Audiovisual and Multimedia Systems; Coding of moving video: Implementors Guide for H.264: Advanced video coding for generic audiovisual services. H.264. International Telecommunication Union. Version 12. Dated Jul. 30, 2010.
International Telecommunications Union, ITU-T, Telecommunication Standardization Section of ITU, “Series H: Audiovisual and Multimedia Systems, Infrastructure of audiovisual services—Coding of moving video”, Mar. 2010, 676 pp.
Lee, Yung-Lyul; Park, Hyun Wook; “Loop Filtering and Post-Filtering for Low-Bit-Rates Moving Picture Coding”, Signal Processing: Image Communication 16 (2001) pp. 871-890.
Lihua Zhu, Guangfei Zhu, Charles Wang; Implementation of video deblocking filter on GPU Apr. 8, 2008.
Method for unloading YUV-filtered pixels from a deblocking filter for a video decoder, Oct. 11, 2006.
Mohammed Aree A., et al., “Adaptive Quantization for Video Compression in Frequency Domain”, Retrieved from the internet <URL http://www.univsul.org/Dosekan_Mamostakan_U/acs15.pdf>.
Mozilla, “Introduction to Video Coding Part 1: Transform Coding”, Video Compression Overview, Mar. 2012, 171 pp.
ON2 Technologies Inc., White Paper TrueMotion VP7 Video Codec, Jan. 10, 2005, 13 pages, Document Version:1.0, Clifton Park, New York.
Overview; VP7 Data Format and Decoder. Version 1.5. On2 Technologies, Inc. Dated Mar. 28, 2005.
Raza, Zahir, “Design of Sample Adaptive Product Quantizers for Noisy Channels”, IEEE Transactions on Communications, vol. 53, No. 4, Apr. 2005, pp. 576-580.
Series H: Audiovisual and Multimedia Systems; Infrastructure of audiovisual services—Coding of moving video. H.264. Advanced video coding for generic audiovisual services. International Telecommunication Union. Version 11. Dated Mar. 2009.
Series H: Audiovisual and Multimedia Systems; Infrastructure of audiovisual services—Coding of moving video. H.264. Advanced video coding for generic audiovisual services. International Telecommunication Union. Version 12. Dated Mar. 2010.
Series H: Audiovisual and Multimedia Systems; Infrastructure of audiovisual services—Coding of moving video. H.264. Amendment 2: New profiles for professional applications. International Telecommunication Union. Dated Apr. 2007.
Series H: Audiovisual and Multimedia Systems; Infrastructure of audiovisual services—Coding of moving video. H.264. Advanced video coding for generic audiovisual services. Version 8. International Telecommunication Union. Dated Nov. 1, 2007.
Series H: Audiovisual and Multimedia Systems; Infrastructure of audiovisual services—Coding of moving video; Advanced video coding for generic audiovisual services. H.264. Amendment 1: Support of additional colour spaces and removal of the High 4:4:4 Profile. International Telecommunication Union. Dated Jun. 2006.
Series H: Audiovisual and Multimedia Systems; Infrastructure of audiovisual services—Coding of moving video; Advanced video coding for generic audiovisual service. H.264. Version 1. International Telecommunication Union. Dated May 2003.
Series H: Audiovisual and Multimedia Systems; Infrastructure of audiovisual services—Coding of moving video; Advanced video coding for generic audiovisual services. H.264. Version 3. International Telecommunication Union. Dated Mar. 2005.
Sye-Hoon Oh, et al. “An Adaptive Sharpening Filter Using Quantization Step Size and Pixel Variance in H.264/AVC”, Consumer Electronics (ICCE), IEEE International Conference on Jan. 9, 2011.
Tan et al., “Classified Perceptual Coding with Adaptive Quantization,” IEEE Transactions on Circuits and Systems for Video Technology, Aug. 1996, pp. 375-388, vol. 6 No. 4.
Tanaka et al., A New Combination of 1D and 2D Filter Banks for effective Multiresolution Image Representation, ICIP, 2008, pp. 2820-2823, IEEE.
Tanaka et al., An adaptive extension of combined 2D and 1D directional filter banks, Circuits and Systems, 2009. ISCAS 2009. IEEE International Symposium on, On pp. 2193-2196.
VP6 Bitstream & Decoder Specification. Version 1.02. On2 Technologies, Inc. Dated Aug. 17, 2006.
VP6 Bitstream & Decoder Specification. Version 1.03. On2 Technologies, Inc. Dated Oct. 29, 2007.
VP8 Data Format and Decoding Guide. WebM Project. Google On2. Dated: Dec. 1, 2010.
Wenger et al.; RTP Payload Format for H.264 Video; The Internet Society; 2005.
Wiegand et al, “Overview of the H 264/AVC Video Coding Standard,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 13, No. 7, pp. 568, 569, Jul. 1, 2003.
Wiegand, Digital Image Communication: Pyramids and Subbands, 21 pages.
Wiegand, Thomas, Study of Final Committee Draft of Joint Video Specification (ITU-T Rec. H.264 | ISO/IEC 14496-10 AVC), Joint Video Team (JVT) of ISO/IEC MPEG & ITU-T VCEG (ISO/IEC JTC1/SC29/WG11 and ITU-T SG16 Q.6), JVT-F100, Dec. 5, 2002.
Wu, Xiaolin, et al., “CALIC-A Context Based Adaptive Lossless Image Codec”, 1996 IEEE International Conference, vol. 4, 4 pages.
Zhi Liu, Zhaoyang Zhang, Liquan Shen, Mosaic Generation in H.264 Compressed Domain, IEEE 2006.
Soon Hie Tan et al., “Classified Perceptual Coding with Adaptive Quantization”, IEEE Transactions on Circuits and Systems for Video Technology, IEEE Service Center, Piscataway, NJ, US, vol. 6, No. 4, Aug. 1, 1996.
ISR and the Written Opinion of the International Searching Authority for International Application No. PCT/US2012/055386, mailed Nov. 7, 2012, 17 pages.