This disclosure relates to digital image and video processing and, more particularly, to enhancing image/video quality through artifact evaluation.
Digital video capabilities may be incorporated into a wide range of devices, including digital televisions, digital direct broadcast systems, wireless communication devices, personal digital assistants (PDAs), laptop computers, desktop computers, digital cameras, digital recording devices, mobile or satellite radio telephones, and the like. Digital video and picture devices can provide significant improvements over conventional analog video and picture systems in creating, modifying, transmitting, storing, recording and playing full motion video sequences and pictures. Video sequences (also referred to as video clips) are composed of a sequence of frames. A picture can also be represented as a frame. Any frame or part of a frame from a video or a picture is often called an image.
Digital devices such as mobile phones and hand-held digital cameras can capture both pictures and video. The pictures and video sequences may be stored and transmitted to another device either wirelessly or through a cable. Prior to transmission, a frame may be sampled and digitized. Once digitized, the frame may be parsed into smaller blocks and encoded. Encoding is sometimes synonymous with compression. Compression can reduce the overall (usually redundant) amount of data (i.e., bits) needed to represent a frame. By compressing video and image data, many image and video encoding standards allow for improved transmission rates of video sequences and images. Compressed video sequences and compressed images are typically referred to as an encoded bitstream, encoded packets, or simply a bitstream. Most image and video encoding standards utilize image/video compression techniques designed to facilitate video and image transmission with fewer transmitted bits than would be used without compression.
In order to support compression, a digital video and/or picture device typically includes an encoder for compressing digital video sequences or pictures, and a decoder for decompressing them. In many cases, the encoder and decoder form an integrated encoder/decoder (CODEC) that operates on blocks of pixels within frames that define the video sequence. In standards such as International Telecommunication Union (ITU) H.264, Moving Picture Experts Group (MPEG)-4, and Joint Photographic Experts Group (JPEG), for example, the encoder typically divides a video frame or image to be transmitted into video blocks referred to as “macroblocks.” A macroblock is typically 16 pixels high by 16 pixels wide. Various sizes of video blocks may be used. Those ordinarily skilled in the art of image and video processing recognize that the terms video block and image block may be used interchangeably. Sometimes, to be explicit about their interchangeability, the term image/video block is used. The ITU H.264 standard supports processing 16 by 16 video blocks, 16 by 8 video blocks, 8 by 16 video blocks, 8 by 8 video blocks, 8 by 4 video blocks, 4 by 8 video blocks and 4 by 4 video blocks. Other standards may support differently sized video blocks. Those ordinarily skilled in the art sometimes use the terms video block and frame interchangeably when describing an encoding process, and may refer to either as video matter. In general, video encoding standards support encoding and decoding a video unit, wherein a video unit may be a video block or a video frame.
For each video block in a video frame, an encoder operates in a number of “prediction” modes. In one mode, the encoder searches similarly sized video blocks of one or more immediately preceding video frames (or subsequent frames) to identify the most similar video block, referred to as the “best prediction block.” The process of comparing a current video block to video blocks of other frames is generally referred to as block-level motion estimation (BME). BME produces a motion vector for the respective block. Once a “best prediction block” is identified for a current video block, the encoder can encode the differences between the current video block and the best prediction block. This process of using the differences between the current video block and the best prediction block includes a process referred to as motion compensation. In particular, motion compensation usually refers to the act of fetching the best prediction block using a motion vector, and then subtracting the best prediction block from an input video block to generate a difference block. After motion compensation, a series of additional encoding steps are typically performed to finish encoding the difference block. These additional encoding steps may depend on the encoding standard being used. In another mode, the encoder searches similarly sized video blocks of one or more neighboring video blocks within the same frame and uses information from those blocks to aid in the encoding process.
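For illustration only (this sketch is not part of the original disclosure), the following Python/NumPy code shows one conventional way block-level motion estimation and motion compensation might be realized; the function names, the SAD matching criterion, and the search range are assumptions.

```python
import numpy as np

def motion_estimate(cur_block, ref_frame, top, left, search_range=8):
    """Full-search block matching: find the motion vector (dy, dx) that
    minimizes the sum of absolute differences (SAD) over the search window."""
    bh, bw = cur_block.shape
    best_sad, best_mv = float('inf'), (0, 0)
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + bh > ref_frame.shape[0] or x + bw > ref_frame.shape[1]:
                continue  # candidate block falls outside the reference frame
            cand = ref_frame[y:y + bh, x:x + bw].astype(np.int32)
            sad = np.abs(cur_block.astype(np.int32) - cand).sum()
            if sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv

def motion_compensate(cur_block, ref_frame, top, left, mv):
    """Fetch the best prediction block using the motion vector and subtract it
    from the current block to produce the difference (residual) block."""
    dy, dx = mv
    bh, bw = cur_block.shape
    pred = ref_frame[top + dy:top + dy + bh, left + dx:left + dx + bw].astype(np.int32)
    return cur_block.astype(np.int32) - pred
```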
In general, as part of the encoding process, a transform of the video block (or difference video block) is taken. The transform converts the video block (or difference video block) from being represented by pixels to being represented by transform coefficients. A typical transform in video encoding is called the Discrete Cosine Transform (DCT). The DCT transforms the video block data from the pixel domain to a spatial frequency domain. In the spatial frequency domain, data is represented by DCT block coefficients. The DCT block coefficients represent the number and degree of the spatial frequencies detected in the video block. After a DCT is computed, the DCT block coefficients may be quantized, in a process known as “block quantization.” Quantization of the DCT block coefficients (coming from either the video block or difference video block) removes part of the spatial redundancy from the block. During this “block quantization” process, further spatial redundancy may sometimes be removed by comparing the quantized DCT block coefficients to a threshold. If the magnitude of a quantized DCT block coefficient is less than the threshold, the coefficient is discarded or set to a zero value.
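A minimal sketch (not from the disclosure) of the transform and block-quantization steps described above, assuming an 8 by 8 block, a single uniform quantization step, and a simple magnitude threshold; real encoders use standard-specific quantization matrices.

```python
import numpy as np

def dct2_matrix(n=8):
    """Orthonormal DCT-II basis matrix for an n x n block."""
    k = np.arange(n).reshape(-1, 1)
    m = np.arange(n).reshape(1, -1)
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    c[0, :] = np.sqrt(1.0 / n)
    return c

def block_quantize(block, qstep=16.0, threshold=1.0):
    """Transform a pixel-domain block to DCT coefficients, quantize them, and
    zero out coefficients whose magnitude falls below the threshold."""
    c = dct2_matrix(block.shape[0])
    coeffs = c @ block.astype(np.float64) @ c.T   # pixel domain -> spatial frequency domain
    q = np.round(coeffs / qstep)                  # "block quantization"
    q[np.abs(q) < threshold] = 0                  # discard coefficients below the threshold
    return q
```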
However, block quantization at the encoder may often cause different artifacts to appear at the decoder when reconstructing the video frames or images that were compressed at the encoder. One example of such an artifact is “blockiness,” in which block boundaries become visible in the reconstructed video image. Some standards have tried to address this problem by including a de-blocking filter as part of the encoding process. In some cases, the de-blocking filter removes the blockiness but also has the effect of smearing or blurring the video frame or image, which is known as a blurriness artifact. Hence, image/video quality suffers either from blockiness or from the blurriness introduced by de-blocking filters. A method and apparatus that could reduce the effect of coding artifacts on the perceived visual quality would therefore be a significant benefit.
In general, an image/video encoding and decoding system employing an artifact evaluator that processes video blocks may enhance image/video quality. During an encoding process, a texture decoder, together with the video block or frame resulting from an inter-coding or intra-coding prediction mode, synthesizes an un-filtered reconstructed video block or frame. The un-filtered reconstructed video block or frame is passed through an artifact filter to yield a filtered reconstructed video block or frame. The artifact filter may be, or may be configured as, a de-blocking filter, in which case it may suppress blockiness. However, after filtering, the resulting filtered reconstructed video block or frame may be blurry. Current encoding methods and standards are limited because they do not have a way to “adaptively” change how an in-loop memory buffer is updated. Because of this limitation, poor image/video quality is propagated to other frames, especially in inter-coding prediction mode.
The use of an artifact evaluator may overcome the limitations of current encoding methods and standards. An artifact evaluator evaluates and determines, based on perceived image/video quality, when it is better to use the output of an artifact filter, such as a de-blocking filter, and when it is better to use the input of the artifact filter, to update the in-loop memory buffer. The use of an artifact evaluator may not only enhance the image/video quality of the current frame relative to current methods and standards, but may also offer the additional advantage of preventing poor image/video quality from propagating to subsequently processed frames, especially in inter-coding prediction mode. The artifact evaluator may also be standard compliant.
For each un-filtered reconstructed video block or frame and each filtered reconstructed video block or frame, an artifact metric may be generated to measure the amount of an artifact. The artifact metric may be a non-original reference (NR) or full-original reference (FR) metric. The difference between an NR and an FR artifact metric may be based on the availability of an original video block or frame. Artifact metric generators generate the artifact metrics and are part of an artifact evaluator. After artifact metrics are generated, a decision is made, based on perceived image/video quality, as to which video block or frame is used in updating an in-loop memory buffer. There are variations on how to generate an artifact metric and various ways to determine whether the filtered or the un-filtered reconstructed video block or frame is used in updating an in-loop memory buffer. These variations are illustrated in the embodiments below.
In one embodiment, an artifact metric generator is used in a video encoder to generate NR artifact metrics.
In another embodiment, an artifact metric generator is used in a video encoder to generate FR artifact metrics.
In a further embodiment, either an NR or an FR artifact metric may be used to measure the amount of blockiness.
In a further embodiment, a configurable artifact metric generator may be used to output multiple artifact metrics at once.
In even a further embodiment, a decision to determine which video block or frame should be used to update an in-loop memory buffer is based on only one type of metric, e.g., a blockiness (or de-blockiness) metric.
In another embodiment, a decision to determine which video block or frame should be used to update an in-loop memory buffer may be based on multiple types of metrics, e.g., a blockiness (or de-blockiness) metric and a blurriness metric.
Some of the embodiments described above may be combined to form other embodiments.
The details of one or more embodiments are set forth in the accompanying drawings and the description below. Other features, objects, and advantages will be apparent from the description, drawings and claims.
One of the accompanying figures illustrates a version of an artifact evaluator which uses one type of metric to make an output decision; a companion figure illustrates a version of an artifact evaluator which uses multiple types of metrics to make an output decision.
The word “exemplary” is used herein to mean “serving as an example, instance, or illustration.” Any embodiment, configuration or design described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other embodiments or designs. In general, described herein is a novel method and apparatus that not only evaluates artifacts but also improves perceived image/video quality as a result of the evaluation.
The source device 4a and/or the receive device 18a, in whole or in part, may comprise a “chip set” or “chip” for a mobile phone, including a combination of hardware, software, firmware, and/or one or more microprocessors, digital signal processors (DSPs), application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or various combinations thereof. In addition, in another embodiment, the image/video encoding and decoding system 2 may be in one source device 4b and one receive device 18b as part of a CODEC 24. Thus, source device 4b and receive device 18b illustrate that a source device and a receive device may each contain at least one CODEC 24.
Texture encoder 47 has a DCT block 48 which transforms the input x (the video block or difference block) from the pixel domain to a spatial frequency domain. In the spatial frequency domain, data is represented by DCT block coefficients. The DCT block coefficients represent the number and degree of the spatial frequencies detected in the video block. After a DCT is computed, the DCT block coefficients may be quantized by quantizer 50, in a process known as “block quantization.” Quantization of the DCT block coefficients (coming from either the video block or difference video block) removes part of the spatial redundancy from the block. During this “block quantization” process, further spatial redundancy may sometimes be removed by comparing the quantized DCT block coefficients to a threshold. This comparison may take place inside quantizer 50 or another comparator block (not shown). If the magnitude of a quantized DCT block coefficient is less than the threshold, the coefficient is discarded or set to a zero value.
After block quantization, the resulting output may be sent to two separate structures: (1) a texture decoder 65, and (2) an entropy encoder 55. Texture decoder 65 comprises a de-quantizer 66 which aids in the production of a reconstructed image/video block or frame to be used with a coding prediction mode. The entropy encoder 55 produces a bitstream for transmission or storage. Entropy encoder 55 may contain a scanner 56 which receives the block quantized output and re-orders it for more efficient encoding by variable length coder (VLC) 58. VLC 58 may employ run-length and Huffman coding techniques to produce an encoded bitstream. The encoded bitstream is sent to output buffer 60. The bitstream may be sent to rate controller 62. While maintaining a base quality, rate controller 62 budgets the number of quantization bits used by quantizer 50. Entropy encoding is considered a non-lossy form of compression. Non-lossy compression signifies that the data being encoded may be identically recovered if it is decoded by an entropy decoder, provided the encoded data has not been corrupted. Entropy encoder 55 performs non-lossy compression.
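As an illustrative sketch only (not taken from the disclosure), the re-ordering performed by a scanner and the run-length stage ahead of a variable length coder might look like the following; the zig-zag order and the (run, level) representation are common conventions and are assumptions here.

```python
import numpy as np

def zigzag_scan(q_block):
    """Re-order an N x N block of quantized coefficients into a 1-D array,
    walking anti-diagonals so that low-frequency coefficients come first."""
    n = q_block.shape[0]
    order = sorted(((r, c) for r in range(n) for c in range(n)),
                   key=lambda rc: (rc[0] + rc[1],
                                   rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))
    return np.array([q_block[r, c] for r, c in order])

def run_length_encode(scan):
    """Represent the scanned coefficients as (zero_run, level) pairs, a common
    precursor to variable-length (e.g., Huffman) coding."""
    pairs, run = [], 0
    for level in scan:
        if level == 0:
            run += 1
        else:
            pairs.append((run, int(level)))
            run = 0
    pairs.append((run, 0))  # trailing zeros signalled by a final (run, 0) pair
    return pairs
```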
Lossy compression means that, as a result of the encoding, an input, x, will not produce an identical copy of x even though the encoded input has not been corrupted. The reconstructed input has “lost” part of its information. Texture encoder 47 performs lossy compression. A typical image/video encoder 23 usually has a local texture decoder 65 to aid in the compensation of both the inter-coding and intra-coding prediction modes. De-quantizer 66, inverse DCT 68, and the output of switch 46 that is sent to adder 69 work together to decode the output of texture encoder 47 and reconstruct the input x that went into texture encoder 47. The reconstructed input, y, looks similar to x but is not exactly x. A general image/video “decoder” typically comprises the functionality of the de-quantizer 66, the inverse DCT 68, and the output of switch 46 that is sent to adder 69.
In some standards, such as MPEG-4 and the H.263 baseline profile, de-blocking filter 70 is not present. In MPEG-4 and the H.263 baseline profile, a de-blocking filter is optional as a post-processing step in the video decoder of a receive device. Other standards, such as ITU H.264, Windows Media 9 (WM9), or Real Video 9 (RV9), support enabling the use of de-blocking filter 70, known as an “in-loop” de-blocking filter. De-blocking filter 70 is used to remove the “blockiness” that appears when the reconstructed input, y, has blocks present. As mentioned previously, in some cases the de-blocking filter removes the blockiness but also has the effect of blurring the video frame or image. There is a tradeoff between the blockiness artifact and the blurriness artifact. Enabling de-blocking filter 70 may reduce blockiness, but it may degrade the perceived visual quality by blurring the image. The standards that enable the use of de-blocking filter 70 always update memory buffer 81 with the filtered reconstructed video block or frame, {tilde over (y)}. It would be of great benefit to determine when it is better to use the output of de-blocking filter 70, and when it is better to use the input of de-blocking filter 70, to update memory buffer 81. Various embodiments in this disclosure identify and solve this limitation of previous standards. Various embodiments in this disclosure teach ways to evaluate and determine when it is better to use the output of an artifact filter, such as de-blocking filter 70, and when it is better to use the input of such an artifact filter.
As mentioned, in some standards, when de-blocking filter 70 is enabled, the output may be sent to memory buffer 81. Inside memory buffer 81 there may be two memory buffers: (1) reconstructed new frame buffer 82; and (2) reconstructed old frame buffer 84. Reconstructed new frame buffer 82 stores the currently processed reconstructed frame (or partial frame). Reconstructed old frame buffer 84 stores a past processed reconstructed frame. The past processed reconstructed frame is used as a (reconstructed) reference frame. The reconstructed reference frame may be a frame that is before or after the current frame in input frame buffer 42. The current frame (or a video block from the current frame), or the differences between the current frame and the reconstructed reference frame (or a video block from the difference block), is what is “currently” being encoded. After the current frame has finished encoding and before the next frame from input frame buffer 42 is fetched to be encoded, reconstructed old frame buffer 84 is updated with a copy of the contents of reconstructed new frame buffer 82.
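A minimal model (not from the disclosure) of the two buffers inside memory buffer 81; the class and method names are illustrative.

```python
import numpy as np

class ReconstructedFrameBuffers:
    """Minimal model of the two in-loop buffers: the 'new' buffer collects the
    reconstructed blocks of the frame being encoded; the 'old' buffer holds the
    reconstructed reference frame used for motion estimation/compensation."""

    def __init__(self, height, width):
        self.new = np.zeros((height, width), dtype=np.uint8)  # reconstructed new frame buffer
        self.old = np.zeros((height, width), dtype=np.uint8)  # reconstructed old (reference) frame buffer

    def write_block(self, block, top, left):
        """Store a reconstructed block (filtered or un-filtered) of the current frame."""
        h, w = block.shape
        self.new[top:top + h, left:left + w] = block

    def end_of_frame(self):
        """After the current frame finishes encoding, the old buffer is updated
        with a copy of the contents of the new buffer."""
        self.old = self.new.copy()
```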
Reconstructed new frame buffer 82 may send the reconstructed video block it received to be used in spatial predictor 86. Reconstructed old frame buffer 84 sends a past processed reconstructed video block to MEC (motion estimation and compensation) block 87. The MEC block comprises motion estimator 88 and motion compensator 90. Motion estimator 88 generates motion vectors (MV) 92 and motion vector predictors (MVP) 94 that may be used by motion compensator 90 to compensate for differences from frames other than the one being encoded. MVs 92 may also be used by entropy encoder 55. In some standards, such as ITU H.264, the output of spatial predictor 86 is used in intra-frame prediction mode and fed back both to subtractor 44 and adder 69. In some standards, such as MPEG-4 or JPEG, there is no spatial predictor 86.
One of the most commonly used metrics to measure image and video quality is the peak signal to noise ratio (PSNR), defined in Equation 1 as follows:

PSNR=10*log10(PKS/coding_error)

where PKS stands for the peak pixel value squared and is usually 255^2.
The coding_error is often computed by taking the Mean Squared Error (MSE) of the difference in pixels between a pair of video blocks. The pair may consist of a video block, x, from the original reference frame and a video block, y, from a reconstructed frame. The PSNR is a function of the coding_error between a pair of video blocks. Coding_error indicates the amount of similarity between pixels in the video blocks being compared. More similar pixels lead to a larger PSNR. A smaller PSNR means that fewer pixels are similar. In addition, the PSNR may also be used to indicate a measure of the average coding error. The average coding_error is denoted by <coding_error>, and may be generated by taking a running average of the coding_error. In this latter case, the PSNR is a measure of the coding_error over the frame. Even though PSNR is a function of the coding_error, a smaller coding_error does not always yield good image and video quality as perceived by the user. As an example, an image of a tiled wall or floor may appear blurry after a de-blocking filter has been applied. The boundary between tiles, the edge, may only represent a small fraction of the overall image. Thus, when the coding_error is computed pixel by pixel, the resulting PSNR may indicate that the image and video quality is good even though the edges of the tiles are blurry. If the de-blocking filter is not applied to the reconstructed image, the tile edges may appear blocky. In a case such as this, the PSNR is undesirably limiting in measuring perceived image and video quality.
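For reference, a short sketch of the PSNR of Equation 1, using the mean squared error as the coding_error; the function signature is an assumption and is not part of the disclosure.

```python
import numpy as np

def psnr(x, y, peak=255.0):
    """Peak signal-to-noise ratio between an original block/frame x and a
    reconstructed block/frame y; coding_error is the mean squared error."""
    coding_error = np.mean((x.astype(np.float64) - y.astype(np.float64)) ** 2)
    if coding_error == 0:
        return float('inf')   # identical blocks
    pks = peak ** 2           # peak pixel value squared (255^2 for 8-bit pixels)
    return 10.0 * np.log10(pks / coding_error)
```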
The limitation of the PSNR may be overcome by a new metric, the artifact signal to noise ratio (ASNR). The ASNR metric offers a method to measure the lack (or presence) of an artifact. A version of the ASNR metric, ASNR(y or {tilde over (y)}), may be generated by artifact metric generator 101.
Two frameworks that may be used when measuring encoding artifacts or coding_error are: (1) non-original reference (NR); and (2) full-original reference (FR). In the NR framework, an artifact metric is generated from a reconstructed video block or frame alone; in the FR framework, the original video block or frame is also available to the artifact metric generator.
In general, the output of an artifact metric generator is a measure of the amount of the artifact. When the artifact is blockiness, an instantiation of the ASNR metric may be used. The instantiation is the de-blocking signal to noise ratio (DSNR) metric, which measures the lack or presence of blockiness. In an NR framework, the generation performed by an artifact metric generator is based only on a reconstructed frame. If the artifact filter 72 is a de-blocking filter, one artifact metric generator 101 may generate a DSNR metric from the un-filtered reconstructed video block or frame, and another artifact metric generator 101 may generate a DSNR metric from the filtered reconstructed video block or frame.
If the original input, x, is fed into artifact metric generator 101 in addition to the reconstructed input, the FR framework may be used.
In order to measure the amount of blockiness in an image or frame, a Mean Square Difference of Slopes (MSDS) metric is sometimes used. However, the MSDS metric does not differentiate between blockiness in the actual texture of the original image or frame and blockiness introduced by the block quantization step of a video encoder. Moreover, the use of the MSDS metric does not exploit the use of human visual perception.
The limitation of the MSDS may be overcome by the DSNR metric. The DSNR metric may have various forms since it is used to better evaluate the image and video quality of blocked-based video encoders by accounting for the different types of blockiness and taking into account human visual perception. As mentioned, the DSNR metric is an instantiation of the ASNR metric.
A general form of the artifact signal to noise ratio (ASNR) metric is shown in Equation 2 as follows:

ASNR(x, y)=10*log10((PKS*WS*WP*WT)/F(x, y))

where PKS stands for the peak pixel value squared and is usually 255^2. The numerator of Equation 2 contains the product of PKS, WS, WP, and WT. WS, WP, and WT are weights selected to account for the spatial (WS), perceptual (WP) and temporal (WT) factors that affect image and video quality. The denominator of Equation 2 is F(x, y) and may be a joint or disjoint function of x and y. If x is not available, F(x, y) may be replaced by F(y). It should also be noted that y, the un-filtered reconstructed video block or frame, may be replaced by {tilde over (y)}, the filtered reconstructed video block or frame.
One of the functions that may be used for F(x, y) is the MSDS_error(x, y). The MSDS_error(x, y) is typically used when the DSNR instantiation of the ASNR metric is employed. In one aspect, the MSDS_error(x, y) may be the squared error between MSDS(x) and MSDS(y). In another aspect, the MSDS_error(x, y) may be the absolute value of the error between MSDS(x) and MSDS(y). The MSDS_error(x, y) may have other variants, but in an FR framework it will often be a function of the error between MSDS(x) and MSDS(y). In an NR framework, the MSDS_error(x, y) may be replaced with at least two different MSDS calculations that may be compared to each other. For example, MSDS(y) and MSDS({tilde over (y)}) may be used. MSDS(x) is a function of an input video block, x, from the original reference frame. MSDS(y or {tilde over (y)}) is a function of a video block, y or {tilde over (y)}, from a reconstructed frame.
The Mean Square Difference of Slopes (MSDS) is often calculated at all video block boundaries, using three different types of slopes near the boundary between a pair of adjacent video blocks. The three different types of slopes are usually calculated between pixels on the same pixel row. Consider two adjacent video blocks, each with L rows, placed directly next to each other. The last two columns of pixels in the first video block are next to the first two columns of pixels in the second video block. A Type 1 slope is calculated between a pixel in the last column and a pixel in the penultimate column of the first video block. A Type 2 slope is calculated between a pixel in the first column and a pixel in the second column of the second video block. A Type 3 slope is calculated between a pixel in the first column of the second video block and a pixel in the last column of the first video block.
Typically the MSDS is illustrated as being calculated over a common row of pixels as in Equation 3:

MSDS(pixels(i))=[edge_slope(i)−average(pre_slope(i), post_slope(i))]^2

where pixels(i) represents the ith group of pixels involved in the calculation in any of the L rows; in this case any ith group contains six pixels, edge_slope(i) is the Type 3 slope, and pre_slope(i) and post_slope(i) are the Type 1 and Type 2 slopes, respectively. For each video block boundary, MSDS(pixels(i)) is averaged over the L rows. An overall (average) MSDS for each video block and video block boundary would be written as in Equation 4 below:

MSDS=(1/L)*[MSDS(pixels(1))+MSDS(pixels(2))+ . . . +MSDS(pixels(L))]

where L is the number of rows that defines the boundary of the video block.
However, since a column is an array of pixels, all slopes of the same type may be calculated in parallel. This parallel calculation is called a gradient. Thus, when calculating the MSDS near the boundary between a pair of adjacent video blocks, three gradients may be computed: (1) pre_gradient (for Type 1 slopes); (2) post_gradient (for Type 2 slopes); and (3) edge_gradient (for Type 3 slopes). Each computed gradient is a vector. As such, parallel instances of Equation 4 may be calculated with Equation 5 below:

MSDS(b)=∥edge_gradient−average(pre_gradient, post_gradient)∥^2

where b represents any video block. MSDS(b) is calculated at the boundaries between a pair of adjacent video blocks over the ith groups of pixels (i=1 . . . L).
By squaring the L2 norm of the difference vector (edge_gradient−average(pre_gradient, post_gradient)), Equation 5 may be implemented. A norm is a mathematical construct. The L2 norm is a type of norm and may be used to calculate the magnitude of a vector. To calculate the magnitude, the L2 norm takes the square root of the sum of squares of the components of a vector. Although the MSDS is often calculated as shown in Equations 4 and 5, variants may exist which do not square the difference between the edge_gradient and the average of the pre_gradient and post_gradient. For example, an L1 norm may be used instead. The embodiments enclosed herein encompass and apply to any variant that uses Type 1, Type 2 and Type 3 slopes.
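A sketch (not from the disclosure) of Equation 5 at a vertical boundary between two horizontally adjacent blocks, computing the three gradients from the columns nearest the boundary; the slope sign convention and the column indexing are assumptions.

```python
import numpy as np

def msds_vertical_boundary(left_block, right_block):
    """MSDS at the boundary between two horizontally adjacent blocks (Equation 5),
    using the Type 1/2/3 gradients over all L boundary rows."""
    a = left_block[:, -2].astype(np.float64)   # penultimate column of the first block
    b = left_block[:, -1].astype(np.float64)   # last column of the first block
    c = right_block[:, 0].astype(np.float64)   # first column of the second block
    d = right_block[:, 1].astype(np.float64)   # second column of the second block

    pre_gradient = b - a           # Type 1 slopes, inside the first block
    post_gradient = d - c          # Type 2 slopes, inside the second block
    edge_gradient = c - b          # Type 3 slopes, across the block boundary

    diff = edge_gradient - 0.5 * (pre_gradient + post_gradient)
    # Squared L2 norm of the difference vector (Equation 5); Equation 4's per-row
    # average would additionally divide by L = left_block.shape[0].
    return float(np.dot(diff, diff))
```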
As mentioned, using the MSDS for F(x, y) yields the DSNR instantiation of the ASNR metric. Similarly, other known metrics may be used in place of F(x, y) to yield other instantiations of the ASNR metric. The general FR form of the de-blocking signal to noise ratio (DSNR) metric is defined in Equation 6 below:

DSNR(x, y)=10*log10((PKS*WS*WP*WT)/MSDS_error(x, y))

The general NR form of the DSNR metric is defined in Equation 7 below:

DSNR(y or {tilde over (y)})=10*log10((PKS*WS*WP*WT)/MSDS(y or {tilde over (y)}))
The denominator of the DSNR metric shown by Equation 7 may be carried out in artifact metric generator 101a. The input is REC (a reconstructed video block or frame), and thus F(x, y) in Equation 2 is only a function of REC, F(y or {tilde over (y)}).
Divider 109 divides the output of numerator producer 107 (PKS*WS*WP*WT) by the output of MSDS 112, MSDS(REC(y or {tilde over (y)})). Log block 114 takes 10*log10 of the result produced by divider 109. The output of log block 114 is the DSNR metric, which is an instantiation of ASNR(y or {tilde over (y)}) computed by artifact metric generator 101.
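The dataflow just described (numerator producer 107, divider 109, log block 114) might be sketched as follows; this is illustrative only, and the default weights of 1 and the small guard against a zero denominator are assumptions.

```python
import numpy as np

def msds_error(msds_x, msds_y):
    """One FR-form choice for the denominator: the squared error between the
    MSDS of the original and the MSDS of the reconstruction."""
    return (msds_x - msds_y) ** 2

def dsnr(denominator, ws=1.0, wp=1.0, wt=1.0, peak=255.0):
    """DSNR = 10*log10(PKS*WS*WP*WT / F), where F is MSDS_error(x, y) in the FR
    form (Equation 6) or MSDS(y or y~) in the NR form (Equation 7)."""
    pks = peak ** 2
    numerator = pks * ws * wp * wt          # "numerator producer 107"
    f = max(float(denominator), 1e-12)      # guard against a zero denominator
    return 10.0 * np.log10(numerator / f)   # "divider 109" then "log block 114"
```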
In general, the selection of the weights, such as those in WVS bank 103, for an ASNR metric is done in such a way as to improve image/video quality. For the DSNR metric, the right amount of de-blockiness is emphasized and the right amount of blurriness is de-emphasized. The selection process is based on graph 118.
In logarithmic form, Equation 2 may be rewritten as:

ASNR(x, y)=10*[log10(PKS)+log10(WS)+log10(WP)+log10(WT)−log10(F(x, y))]
Taking the logarithm of the numerator components and denominator shows that the effect of the weights is either additive, subtractive, or has no effect (when the weight value is 1).
The choice of input parameters varies. However, choices for ZS, ZP, and ZT may be as follows. ZS may be generated by a multi-step process explained through an example. Consider a current video block to be encoded, E, that has neighbors D (to its left), B (above it), and A (located near its upper left diagonal). Part of video block E and part of video block A are used to form video block AE. Similarly, video blocks BE and DE may be formed. DCTs may be computed for each of video blocks AE, BE, and DE, and the average of the DCTs may be used for ZS. ZP may be generated by computing an average DCT over an entire frame. ZT may be generated by computing the difference between the average DCT in one frame and the average DCT in another frame.
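The following sketch (not from the disclosure) illustrates only the ZP and ZT inputs as frame-average DCTs; the formation of the combined blocks AE, BE and DE for ZS is omitted, the block size and the use of SciPy's dctn are assumptions, and the frame dimensions are assumed to be multiples of the block size.

```python
import numpy as np
from scipy.fft import dctn  # 2-D DCT-II

def zp_from_frame(frame, block=8):
    """ZP: average DCT over an entire frame, computed block by block."""
    h, w = frame.shape
    acc, count = np.zeros((block, block)), 0
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            acc += dctn(frame[r:r + block, c:c + block].astype(np.float64), norm='ortho')
            count += 1
    return acc / count

def zt_between_frames(frame_a, frame_b, block=8):
    """ZT: difference between the average DCT of one frame and that of another."""
    return zp_from_frame(frame_a, block) - zp_from_frame(frame_b, block)
```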
F_err block 123 may be used to compute the error between an instance of a function of the original video block or frame and an instance of a function of a reconstructed video block or frame. The difference between the functions is computed by subtractor 44, and norm factor (NF) 128 can be selected for a particular choice of F. Artifact metric generator 121 may implement the functions of artifact metric generator 101. This may be seen by comparing the architecture of artifact metric generator 121 with that of artifact metric generator 101a.
Since artifacts may affect image and video quality, a way to use the metrics to aid in evaluating perceived image and video quality during the encoding process is desired. The use of artifact evaluator 140 in-loop during the encoding process provides such a way.
Using the artifact evaluator in-loop may not only enhance the image/video quality of the current frame, but may also prevent poor image/video quality from propagating to subsequently processed frames.
In addition, since some standards, such as ITU H.264, WM9, and RV9, support the use of de-blocking filters, the use of artifact evaluator 140 is standard compliant. For example, the decision of which reconstructed (filtered or un-filtered) video block or frame was used in the encoder to update memory buffer 81 may be passed to the video decoder. Thus, for a video encoder and video decoder to be “in-sync,” the decision may be inserted into the video decoder's header information, i.e., it can be inserted as part of the bitstream that tells the video decoder if the de-blocking filter is on or off.
A comparison 160 between MA[1](x, {tilde over (y)}) and a blockiness threshold is made to check the amount of blockiness present in the filtered reconstructed video block (or frame) {tilde over (y)}. If the comparison 160 is true (YES), then {tilde over (y)} meets an “acceptable” perceived image and video quality for blockiness. A further comparison 162 between MA[2](x, {tilde over (y)}) and a blurriness threshold is made to check the amount of blurriness present in {tilde over (y)}. If the comparison 162 is true (YES), then {tilde over (y)} meets an “acceptable” perceived image and video quality for both blurriness and blockiness. The resulting output QA(x, y, {tilde over (y)}) becomes {tilde over (y)} (164), and the encoder memory buffer (memory buffer 81) is updated with {tilde over (y)}.
If either comparison 160 or 162 is false (NO), then a comparison 166 between MA[1](x, y) and a blockiness threshold is made to check the amount of blockiness present in the un-filtered reconstructed video block (or frame) y. If the comparison 166 is true (YES), then y meets an “acceptable” perceived image and video quality for blockiness. A further comparison 168 between MA[2](x, y) and a blurriness threshold is made to check the amount of blurriness present in y. If the comparison 168 is true (YES), then y meets an “acceptable” perceived image and video quality for both blurriness and blockiness. The resulting output QA(x, y, {tilde over (y)}) becomes y (170), and the encoder memory buffer (memory buffer 81) is updated with y.
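A sketch of the comparison logic just described; it assumes signal-to-noise-style metrics (larger values mean the artifact is less visible), and the fallback when neither reconstruction passes both thresholds is an assumption, since that branch is not spelled out here.

```python
def choose_buffer_update(metrics_filtered, metrics_unfiltered,
                         blockiness_threshold, blurriness_threshold):
    """Decide whether the filtered (y~) or un-filtered (y) reconstruction updates
    the in-loop memory buffer. Each metrics argument is a
    (blockiness_metric, blurriness_metric) pair, e.g. (MA[1], MA[2])."""
    blockiness_f, blurriness_f = metrics_filtered
    blockiness_u, blurriness_u = metrics_unfiltered

    # Comparisons 160/162: does the filtered reconstruction meet acceptable
    # quality for both blockiness and blurriness?
    if blockiness_f >= blockiness_threshold and blurriness_f >= blurriness_threshold:
        return 'filtered'     # update the memory buffer with the filtered reconstruction
    # Comparisons 166/168: does the un-filtered reconstruction meet both?
    if blockiness_u >= blockiness_threshold and blurriness_u >= blurriness_threshold:
        return 'unfiltered'   # update the memory buffer with the un-filtered reconstruction
    # Neither passes both thresholds: fall back to the better combined score
    # (an assumption; the original decision rule for this branch is not shown).
    if blockiness_f + blurriness_f >= blockiness_u + blurriness_u:
        return 'filtered'
    return 'unfiltered'
```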
A number of different embodiments have been described. The techniques may be capable of improving video encoding by improving image and video quality through the use of an artifact evaluator in-loop during the encoding process. The techniques are standard compliant. The techniques also may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the techniques may be directed to a computer-readable medium comprising computer-readable program code (also may be called computer-code), that when executed in a device that encodes video sequences, performs one or more of the methods mentioned above.
The computer-readable program code may be stored on memory in the form of computer readable instructions. In that case, a processor such as a DSP may execute instructions stored in memory in order to carry out one or more of the techniques described herein. In some cases, the techniques may be executed by a DSP that invokes various hardware components such as a motion estimator to accelerate the encoding process. In other cases, the video encoder may be implemented as a microprocessor, one or more application specific integrated circuits (ASICs), one or more field programmable gate arrays (FPGAs), or some other hardware-software combination. These and other embodiments are within the scope of the following claims.