Removing noise from a video signal before the signal is encoded is an important feature of most modern video encoding architectures, since it can considerably enhance coding efficiency while at the same time improving the objective and subjective quality of the resulting encoded video signal. Digital still or video pictures can contain noise introduced by the capturing process, the analog-to-digital conversion process, transcoding along the delivery channel, transmission effects, or other causes. Noise produces effects that a user can perceive in the displayed video, resulting in a visually displeasing picture. It can also have a severe adverse effect on many video applications, particularly video compression. Due to its random nature, noise can considerably decrease spatial and temporal correlation, thus limiting the coding efficiency of noisy video signals. Furthermore, at low bit rates, uncorrelated compression artifacts between successive pictures coded with different encoding modules can lead to temporal artifacts in the form of flicker or pulsation between pictures. It is therefore desirable to remove noise; it is equally important, however, not to remove important details of the picture, such as edges or texture.
Several conventional algorithms exist in which removal of noise, or de-noising, is performed using spatial and/or temporal methods. Such noise reduction schemes can be spatial in nature, addressing one frame at a time. Conventional spatial algorithms tend to remove spatially redundant information and noise. Conventional temporal schemes, apart from removing noise and preserving details such as edges that might otherwise be lost to spatial filtering, also tend to enhance temporal correlation between adjacent frames. However, these conventional architectures perform this process outside the encoder loop. As a result, no consideration is given to the artifacts introduced by the encoding process.
Many noise reduction schemes applied as pre-processing prior to compression address coding efficiency and improved subjective quality compared to coding an unfiltered source. In this context, knowledge of the encoding process could lead to further improvements, both subjective and objective, but to date such knowledge has not been exploited. Conventional temporal filtering methods may employ motion compensated techniques for improved performance. At most, feedback exists from the encoder in terms of adapting certain parameters of the filtering process, such as those based on the target bit rate, increasing or decreasing the filtering applied to the current picture. These methods still do not incorporate any information about the nature of previously coded pictures.
Conventional schemes can be used for addressing coding efficiency and subjective quality compared to coding an unfiltered source, but none adequately addresses temporal artifacts that are apparent as defects in the resulting video picture. More specifically, at very low bit rates, using fixed GOP (Group of Pictures) structures (i.e., a repetitive sequence of intra-coded (I) pictures followed by a sequence of inter-coded (P and B) pictures) can result in distinct temporal artifacts (i.e., a pumping/beating/pulsation picture effect) at GOP boundaries. These artifacts are a result of the different coding artifacts introduced by the different picture/prediction coding types, and of the lack of temporal correlation at GOP boundaries. They are apparent in all existing video compression standards, such as MPEG-2 [1] and MPEG-4, but can be even more prominent in standards such as JVT/H.264/MPEG AVC [2], where additional processes are applied for intra and inter coding, including the prediction process and de-blocking. These artifacts can persist even when a conventional spatio-temporal pre-filtering scheme is used, regardless of the resulting increase in temporal correlation between adjacent original filtered pictures.
Therefore, given conventional solutions, there still exists a need for adequately removing such artifacts from a video picture. As will be seen, the invention resolves this need in an elegant manner.
According to the invention, knowledge of the encoding process is used to provide further improvements to video quality, both subjectively and objectively. The invention relates to the general class of hybrid motion compensated entropy based encoders, referred to generally in this document as “MPEG Encoders”, which may include MPEG, MPEG-2, and other encoder standards. The invention provides an additional pre-filtering step that is introduced prior to the encoder, where previously reconstructed pictures are also used for temporal filtering within this process/loop. This has the implication that temporal correlation will increase between adjacent pictures regardless of the coding type, since the artifacts introduced by the encoding process are also considered by the filter. This may be applied on a regional basis or on a frame by frame basis, depending on the application. It may also be applied on a pixel by pixel, block by block, or macroblock by macroblock basis, depending on the application. For the purpose of description in this discussion, the terms block and macroblock are used interchangeably and denote some two-dimensional region of the picture of any size. Also, the processing of picture data may be performed from top to bottom of a frame or in other orientations. The input picture frame data and pre-encoded frame data may be processed linearly in time, or may be processed in a non-linear fashion. Those skilled in the art will understand that, given the description below, various processing methods can be easily derived to process incoming video signals together with pre-encoded picture data to produce an improved input for an encoder process. Such methods would not depart from the spirit and scope of the invention, which is defined by the appended claims and their equivalents.
According to the invention, a novel architecture, referred to as in-loop temporal pre-filtering, is proposed, in which a novel in-loop temporal filter is provided. In one embodiment, an in-loop temporal pre-filter is provided for filtering a video signal prior to digital encoding. The filter includes one input configured to receive one or more input video picture frames, and another input for receiving one or more reconstructed pictures from an encoding process. Within the in-loop temporal filter, logic is configured to combine data related to at least one input video frame and at least one reconstructed picture from the encoding process to output a pre-filtered video signal for use in an encoding process. This logic may be configured in hardware, coded in software, or configured with a combination of hardware and software to produce the optimum result. Those skilled in the art will understand that various configurations can be made using logic hardware as well as software without departing from the spirit and scope of the invention, which, again, is defined in the appended claims and their equivalents.
According to the invention, the novel in-loop temporal filter can be configured to process a single pre-encoded frame, such as a reconstructed frame stored in a frame memory, as described in the embodiment below. Alternatively, the novel filter can be configured to process multiple pre-encoded frames. Similarly, the novel in-loop temporal filter can be configured to process either a single input picture frame or multiple input frames. The invention provides an in-loop temporal filter that is able to combine input picture frame data and encoded frame data in a novel way to produce an improved pre-filtered input to an encoder process, which can then produce an encoded output with improved temporal correlation and reduced artifacts in the output video signal.
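By way of illustration only, the following Python sketch shows the minimal shape of such a combination: one input frame, one reconstructed frame from the encoder's frame memory, and a blend that pulls the input toward the previously coded picture. All names and the fixed blend weight are hypothetical; the patent text defines no code and adapts its weights per block.

```python
import numpy as np

def in_loop_temporal_prefilter(input_frame, reconstructed_frame, weight=0.75):
    """Blend the current input frame with the co-located reconstructed
    picture from the encoder's frame memory. Pulling the input toward
    the previously coded picture increases temporal correlation across
    coding types and damps GOP-boundary pulsation."""
    # Fixed blend for illustration only; the invention adapts the weights
    # per block from motion, texture, and correlation statistics.
    return weight * input_frame + (1.0 - weight) * reconstructed_frame

# Usage: luma planes as float arrays
current = np.full((16, 16), 120.0)
reconstructed = np.full((16, 16), 112.0)
prefiltered = in_loop_temporal_prefilter(current, reconstructed)  # 118.0 everywhere
```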
In another embodiment, the in-loop temporal pre-filter may be configured to continue and further refine the picture conditioning operations begun by conventional video pre-processing, where the input pictures may be first temporally and then, in some embodiments, spatially filtered. They may also be first spatially and then temporally filtered, or simultaneously temporally and spatially filtered then output to the in-loop temporal filter to provide a signal for use in conventional encoder architectures. Referring to
Regarding
The method, system, and program product may be implemented in, or otherwise in conjunction with, almost any encoder configuration. Such an encoder may be configured according to existing video coding standards, such as the ISO MPEG and ITU-T H.26x standards, or other architectures (Microsoft's VC-1, On2, etc.). Referring again to
In operation, the picture input 112 is received by the in-loop temporal filter 114. According to the invention, this filter operation uses new pictures that are received as an input, as well as reconstructed block data from storage in frame memory 116, and temporally filters the two inputs, producing the current block data 118 that is input to the encoder. In alternative embodiments, the in-loop temporal filter is further configured to receive an input of motion vectors from the motion estimation unit 120, or alternatively from statistics storage related to the pre-filtering process; these separate embodiments are described further below in connection with
The in-loop temporal filter, 114, still referring to
In one embodiment, and in contrast to a video pre-processor that is configured to process pixel data in raster scan order (one horizontal line of video at a time), the in-loop filter may operate as a pre-processor that processes data in block order, considering either one block at a time or multiple blocks at a time, possibly in a row of blocks across the picture. Furthermore, whatever block size or level the in-loop filter processes the data at, the order in which the picture frame data is processed may be linear or non-linear. Still further, an image in a picture frame may be processed from top to bottom, bottom to top, or in other known manners of processing video picture data, which can vary among particular applications. In any configuration, the invention is not limited to any particular order in which picture frame data is processed, or the manner or scope in which the picture frame is processed. Those skilled in the art will understand that the invention, given this detailed description, may take on different configurations to optimize video input data to an encoder process, again to ultimately produce an encoded output with improved temporal correlation and reduced artifacts in the output video signal from the encoder process, without departing from the spirit and scope of the invention, which, again, is defined by the appended claims.
Referring to
Referring for
Referring now to
Also, according to the invention, the temporal filters 114, 108 can be used as illustrated, or can be combined to reduce the complexity of a system. For example, two buffers in the scheme can be reconstructed-frame buffers that, apart from previously filtered pictures, also contain previously coded pictures coming from the encoder. Motion estimation and compensation could be performed using a filtered picture at time t-1, but could also use the same picture after encoding, while a different weight would be used for generating the final filtered picture.
Referring now to
In one embodiment of
In one embodiment, still referring to
As discussed above, the temporal filter 108 may be incorporated into or its functions performed within the in-loop temporal filter 114. In such a configuration, still referring to
In another embodiment of
Still referring to
In yet another embodiment of
In either configuration of Figure D, any number of paths can be combined to produce an improved input to the encoder process, and ultimately produce an encoded output with improved temporal correlation and reduced artifacts in the output video signal from the encoder process. Those skilled in the art will understand that there are various combinations and permutations that can be configured to produce such an output, and the invention is not limited to any particular combination.
Referring now to
According to the invention, these several embodiments may be combined in other combinations and permutations in order to improve the pre-filtering process to produce a signal that is ultimately encoded in the encoding process. Those skilled in the art will further understand that such pre-filtering process is unique in the way that the in-loop temporal filter receives reconstructed pictures from within the encoding process, combines them with the video input signal, whether spatially or temporally filtered or not, and temporally filters the signals, combining the picture frames in a manner according to a novel process, to produce a pre-filtered input for ultimate use in the encoding process.
More specifically, in the filtering architecture of the final filtered picture f̂(x, y, t) is generated as:

f̂(x, y, t) = [w_Sp1·f′_Sp1(x, y, t) + w_Sp2·f′_Sp2(x, y, t) + Σ_k w_k·f′_T(x, y, t+k)] / [w_Sp1 + w_Sp2 + Σ_k w_k]

where f′_Sp1(x, y, t) and f′_Sp2(x, y, t) are spatially filtered versions of the original picture, the f′_T(x, y, t+k) are motion compensated (MC) predictions from past and future frames, and w_Sp1, w_Sp2, and w_k are the weights associated with each spatial and temporal prediction. According to the invention, the in-loop filtering can be performed as:

f̂(x, y, t) = [w_Sp1·f′_Sp1(x, y, t) + w_Sp2·f′_Sp2(x, y, t) + Σ_{k=−N}^{−1} (w_k·f′_T(x, y, t+k) + ŵ_k·f̂′_T(x, y, t+k))] / [w_Sp1 + w_Sp2 + Σ_{k=−N}^{−1} (w_k + ŵ_k)]   (1)

where f̂′_T(x, y, t+k) is the coded version of f′_T(x, y, t+k) and ŵ_k the associated weight. Weights in general can be determined based on the correlation of the current picture with the original reference and with the coded (reconstructed) reference, the distortion of the coded reference relative to its original, motion, texture, and so on. High correlation and low motion, for example, may suggest increasing the weighting parameters, while high texture may require a more careful adjustment of such weights.
Such filtering could, for instance, include weighted averaging between the current input picture and previously reconstructed picture. This weighting process may be based on different temporal correlation metrics such as motion characteristics, color and other factors.
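As a purely numeric illustration of that normalized weighted combination (hypothetical Python helper and values; weight selection is left as an input, since the invention derives it from the statistics above):

```python
import numpy as np

def weighted_combination(terms):
    """terms: list of (weight, picture) pairs covering the spatially
    filtered versions, the MC predictions, and (in-loop) their coded
    counterparts. Returns the normalized weighted sum, as in the
    formulas above."""
    total_weight = sum(w for w, _ in terms)
    return sum(w * p for w, p in terms) / total_weight

f_sp1 = np.full((8, 8), 100.0)   # spatially filtered version 1
f_t   = np.full((8, 8), 104.0)   # MC prediction from the previous original
f_hat = np.full((8, 8), 110.0)   # coded (reconstructed) MC prediction

out = weighted_combination([(2.0, f_sp1), (1.0, f_t), (1.0, f_hat)])
# (2*100 + 104 + 110) / 4 = 103.5 everywhere
```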
The in-loop pre-filtering is performed within the encoder, and therefore is able to take advantage of already existing elements within this process, in particular the motion estimation and compensation modules. For example, for intra slices these modules remain idle, whereas it may be more efficient to reuse them for performing motion estimation and compensation for filtering purposes. Generally, although the previously reconstructed data used by the encoder pre-processing is the block co-located with the current block being processed, the data used in analyzing motion characteristics is not necessarily the co-located data but may instead come from a region around the predicted motion vector.
Those skilled in the art will understand that there are different configurations possible that may be simply a different arrangement or combination of the different components of the embodiments described herein. Such changes do not, however, depart from the spirit and scope of the invention, which is defined by the appended claims and their equivalents.
A system configured according to the invention results in a dramatic increase in the correlation of the pictures prior to encoding, since it takes the already encoded pictures into account along with the input pictures. This reduces the visibility of temporal artifacts (i.e., a pumping/beating/pulsation picture effect), especially at GOP boundaries, and gives a clearer and more vivid video presentation. According to the invention, in operation, the in-loop temporal filter 114 operates in the temporal domain and generates a picture adaptively, based on motion content and texture content. The generated picture is the combination of the current input picture and the previously reconstructed picture. Still referring to
Referring again to the diagram of
In operation, and still referring to
The encoding process generates compressed bitstreams for transmission on a channel or storage in an external medium. During the encoding process, motion vectors are generated from pictures in the sequence. These pictures may not be contiguous in time; for example, motion vectors can be generated between the ith and (i+n)th pictures, where n can take a value greater than or equal to 1. The input 118 of a subsequent picture is transmitted to the motion estimation unit 120 of the encoder 102. Motion vectors 148 are formed as the output of the motion estimation unit 120. These vectors are used by the motion compensation unit 142 to retrieve block data from previous and/or future pictures, referred to as “reference” data, for output by this unit. One output of the motion compensation unit 142 is negatively or positively summed with the output from the motion estimation unit 120 and goes to the input of the discrete cosine transformer 122. The output of the discrete cosine transformer 122 is quantized in quantizer 126. The output of the quantizer 126 is split into two outputs, 125 and 129. One output 125 goes to a downstream element, illustrated here as variable length coder 130, for further compression and processing before transmission. The other output 129 goes through reconstruction of the encoded block of pixels for storage in frame memory 116. In the encoder shown for purposes of illustration, this second output 129 goes through an inverse quantization 132 and an inverse discrete cosine transform 134 to return a lossy version of the difference block. This data is summed with the output of the motion compensation unit 142 and returns a lossy version of the original picture to the frame memory 116.
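For readers tracing the second output 129, the reconstruction path can be sketched compactly in Python. The step size and the round-and-scale quantizer below are illustrative stand-ins for the standard-specific operations, not the invention's definition of them:

```python
import numpy as np
from scipy.fft import dctn, idctn

def reconstruct_block(current_block, prediction, qstep=8.0):
    """Transform and quantize the prediction residual, then invert both
    steps and add the prediction back: the lossy block stored in frame
    memory 116 and later reused by the in-loop pre-filter."""
    residual = current_block - prediction
    coeffs = dctn(residual, norm='ortho')                 # DCT 122
    levels = np.round(coeffs / qstep)                     # quantizer 126
    lossy_residual = idctn(levels * qstep, norm='ortho')  # units 132 and 134
    return prediction + lossy_residual                    # summed with MC output
```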
The invention may be implemented, for example, in hardware, software (perhaps as an operating system element), or a combination of the two, a dedicated processor, or a dedicated processor with dedicated code. If in software, the invention is a process that executes a sequence of machine-readable instructions, which can also be referred to as code. These instructions may reside in various types of signal-bearing media. In this respect, the invention provides a program product comprising a signal-bearing medium or signal-bearing media tangibly embodying a program of machine-readable instructions executable by a digital processing apparatus to perform a novel method of pre-filtering video signals prior to being encoded.
The signal-bearing medium may comprise, for example, memory in a server. The memory in the server may be non-volatile storage, a data disc, or even memory on a vendor server for downloading to a processor or a quantizer for installation. Alternatively, the instructions may be embodied in a signal-bearing medium such as an optical data storage disc. Alternatively, the instructions may be stored on any of a variety of machine-readable data storage media, which may include, for example, a “hard drive”, a RAID array, a RAMAC, a magnetic data storage diskette (such as a floppy disk), magnetic tape, digital optical tape, RAM, ROM, EPROM, EEPROM, flash memory, magneto-optical storage, paper punch cards, or any other suitable signal-bearing media, including transmission media such as digital and/or analog communications links, which may be electrical, optical, and/or wireless. As an example, the machine-readable instructions may comprise software object code, compiled from a language such as “C” or “C++”. Additionally, the program code may, for example, be compressed, encrypted, or both, and may include executable files, script files, and wizards for installation, as in Zip files and cab files. As used herein, the term machine-readable instructions or code residing in or on signal-bearing media includes all of the above means of delivery.
Referring to
Referring to
Referring to
Referring to
In one embodiment, if N=1 in Formula (1), the first three terms can be combined into a single term, input_pel, the output of the conventional spatio-temporal pre-filter 106, as shown in
f(x,y)=(weight*input_pel(x,y)+stationary*rec_pel(x,y))/(stationary+weight) (2)
where weight and stationary are weighting factors whose sum performs the normalization. Referring to
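In Python, Formula (2) is a one-line blend; the worked values below are hypothetical and simply show weight=2, stationary=1 moving the result one third of the way toward the reconstructed pixel:

```python
def formula_2(input_pel, rec_pel, weight, stationary):
    """Per-pixel blend of the pre-filter output with the co-located
    reconstructed pixel, normalized by the sum of the two weights."""
    return (weight * input_pel + stationary * rec_pel) / (stationary + weight)

# (2*90 + 1*120) / 3 = 100.0
assert formula_2(90.0, 120.0, weight=2, stationary=1) == 100.0
```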
Referring to
Referring back to step 212, if luma is being processed, then Cmp is set equal to 1 in step 252, and it is determined in step 254 whether VPP data is available. If not, then predetermined values are again used (in this example, stationary=0, weight=1, and threshold difference thrd_diff=3), and the stationary and weight computation is complete. If VPP values are available, then the novel process of filtering using motion and high frequency data is performed, beginning at step 226. Again, the thresholds chosen here are intended only as examples; other predetermined thresholds can be used, and they can also change throughout the process. The actual numbers relate to percentages of motion content in a block and percentages of frequency content in a block. For example, if every pixel moved in a given frame, then the motion value would be 100; if none moved, it would be zero; if 7% moved, it would be 7; and so on. Again, the values are only examples, and in no way limit the scope of the invention.
In step 226, it is determined whether the motion value is less than the minimum of an example value, 3, and a threshold thrd, namely min(3,thrd). If it is, it is then determined in step 228 whether the high frequency value is less than 7. If the high frequency is less than 7, then the stationary value, stationary, is set to 1 and the weight value, weight, to 1 in step 232. If not, then stationary is set to 2 and weight to 3 in step 230. Referring back to step 226, if the motion is not less than min(3,thrd), then the process goes to step 234, where it is determined whether the motion value is less than min(6,thrd). If it is, the process proceeds to step 236, where it is determined whether the high frequency is less than 7. If the high frequency is less than 7, then the stationary value is set to 2 and the weight value to 3 in step 240. If not, then the stationary value is set to 1 and the weight to 2 in step 238.

Referring back to step 234, if the motion is not less than min(6,thrd), then the process goes to step 242, where it is determined whether the motion value is less than thrd. If it is, the process proceeds to step 246, where it is determined whether the high frequency is less than 7. If the high frequency is less than 7, then the stationary value is set to 1 and the weight value to 2 in step 250. If not, then the stationary value is set to 1 and the weight to 3 in step 248.

Referring back to step 242, if the motion value is not less than the thrd threshold value, then the stationary value is set to 0 and the weight value is set to 1 in step 244.
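The branch structure of steps 226 through 250 translates directly into a small Python function. The threshold values (3, 6, 7) are the examples from the text above, not values fixed by the invention:

```python
def stationary_and_weight(motion, high_freq, thrd):
    """Decision tree of steps 226-250: pick (stationary, weight) for
    Formula (2) from a block's motion and high-frequency content."""
    if motion < min(3, thrd):                          # step 226
        return (1, 1) if high_freq < 7 else (2, 3)     # steps 232 / 230
    if motion < min(6, thrd):                          # step 234
        return (2, 3) if high_freq < 7 else (1, 2)     # steps 240 / 238
    if motion < thrd:                                  # step 242
        return (1, 2) if high_freq < 7 else (1, 3)     # steps 250 / 248
    return (0, 1)                                      # step 244: no in-loop blend
```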
Generally referring to
Referring again to
In (2), setting weight+stationary=1 gives the in-loop pre-filtering feature control and its gain control in the forms, respectively,
f(x,y)=(1−stationary)*input_pel(x,y)+stationary*rec_pel(x,y) (3)
f(x,y)=(1−gain)*input_pel(x,y)+gain*rec_pel(x,y) (4)
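A quick numeric check (hypothetical values) that Formula (2) collapses to the convex form of (3) when the two weights sum to one:

```python
stationary = 0.25
weight = 1.0 - stationary            # weight + stationary = 1
input_pel, rec_pel = 90.0, 120.0

via_2 = (weight * input_pel + stationary * rec_pel) / (stationary + weight)
via_3 = (1 - stationary) * input_pel + stationary * rec_pel
assert abs(via_2 - via_3) < 1e-12    # both equal 97.5
```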
Referring to
The difference of these inputs is derived in arithmetic unit 302, and the result is sent to multiplier 304 and absolute value unit 306. The result is the difference between co-located pixels in a frame, and these differences are used by the in-loop temporal pre-filter according to the invention to produce a higher quality output video picture. The absolute value result is transmitted to low pass filter 308. It will be understood by those skilled in the art that such a low pass filter 308 has taps, namely [1,3,8,3,1], which are in practice divided by 16, and it will be further understood that these values are typical examples and in no way limit the invention. The low pass filter then transmits the result to the motion look-up table (MLUT) 310 to generate a value M from the MLUT. The frame changes are then manifested in this M value, which indicates whether there has been any substantial change in the current frame compared to a previous frame or frames. This value is then input into the selection unit 312 to contribute to the ultimate output signal, as described further below.
Simultaneously, the Fc value is fed into the 7-tap filter 314, which is defined as a low pass filter. It will be understood by those skilled in the art that the 7-tap filter has tap values [−1,0,9,16,9,0,−1], which are in practice divided by 32, or in integer arithmetic shifted right by 5 (>>5), and it will be further understood that these values are typical examples and in no way limit the invention. The output from the 7-tap filter is then compared to the Fc value in adder 316, then sent to gain unit 318, illustrated as a 6[4,2] bit value, to produce a high frequency detail signal. Gain unit 318 controls the amount of high frequency relevant for texture detection. This value is set externally based on the statistical characteristics derived from the encoding process of the input sequence. For example, if the input sequence is determined to have globally low texture, the value of gain 318 is set high so that even small textures are taken into account. The value of gain 318 may range from 0.25 to 15.75, for example. This high frequency result is sent to selection unit 312 along with value M. The selection unit receives as inputs motion thresholds M0, M1, and M2, as well as high frequency thresholds H0, H1, and H2, where all of the thresholds are illustrated as 8 bit values. These thresholds are predetermined in a manner to effectively choose stationarity coefficients used to produce the 8 bit output S shown here. The function of the selection unit is to convert the high frequency and motion values into a stationary signal having coefficient values. High frequency values are representative of picture texture, where the amount of high frequency in a picture is an indicator of detailed textures. Those skilled in the art will understand that the thresholds may vary from application to application, and that different thresholds will produce different output values of S. The invention is not limited to any particular thresholds, or to any particular size inputs or outputs to the selection unit 312.
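The analysis path just described can be sketched in a few lines of Python. The tap values are those given above; the MLUT contents and the exact selection rule are not specified numerically in the text, so the mappings below are hypothetical stand-ins:

```python
import numpy as np

MOTION_TAPS = np.array([1, 3, 8, 3, 1]) / 16.0             # low pass filter 308
DETAIL_TAPS = np.array([-1, 0, 9, 16, 9, 0, -1]) / 32.0     # 7-tap filter 314

def analyze_line(fc, fp, gain=1.0):
    """Per-line motion and high-frequency (texture) measures: |Fp - Fc|
    low-pass filtered (the input to MLUT 310), and Fc minus its low-pass
    version, scaled as in gain unit 318."""
    motion = np.convolve(np.abs(fp - fc), MOTION_TAPS, mode='same')
    detail = gain * (fc - np.convolve(fc, DETAIL_TAPS, mode='same'))
    return motion, detail

def stationarity(m, h, m_thr=(4, 8, 16), h_thr=(4, 8, 16)):
    """Hypothetical selection rule for unit 312: the more motion and
    texture present, the smaller the stationarity coefficient S."""
    level = sum(m >= t for t in m_thr) + sum(abs(h) >= t for t in h_thr)
    return max(0.0, 1.0 - level / 6.0)   # S in [0, 1]; 1 = fully stationary
```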
The stationary signal S is then multiplied, in multiplication unit 304, by the differential signal from arithmetic unit 302; the product is added to the Fc input value to give a filtered output, shown as an 8 bit value. This 8 bit value is defined as
Filt=(1−S)*Fc+S*Fp
In this embodiment, the values of (1−S) and S add up to a constant value of unity. The result of Filt is then transmitted to summation unit 320 and summed with the Fc value, then directed down two paths: one 9 bit path shown, and another path where the absolute value of the result of Filt is calculated in absolute value unit 322, then shifted right according to a 2 bit value in shifting unit 323. This value is determined by the global amount of texture and motion detected by external means. In normal operation, an external process (not defined in this document) analyzes the statistical characteristics of the input picture sequence to determine the amount of low texture, high texture, motion content, color content, etc. Transition coefficients are then determined in look up table (LUT) 324 to give an 8 bit output, T, which is the final value used to blend the filtered block Filt with the original input. The value of T is multiplied in multiplier 326 with the result from adder 320. This result is then added to Fc in adder 328, giving the final output:
In-loop=(1−T)*Fc+T*Filt
This is the output 118 of the in-loop pre-filter to be used in the encoding process. Again, in this structure, the quantities (1-T) and T add up to unity (1.0).
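Putting the two blend stages together in Python form (S and T as computed above; the scalar values are hypothetical):

```python
def two_stage_blend(fc, fp, S, T):
    """Stage 1 (units 304/adder): Filt = (1-S)*Fc + S*Fp.
    Stage 2 (units 326/328):     output = (1-T)*Fc + T*Filt."""
    filt = (1.0 - S) * fc + S * fp
    return (1.0 - T) * fc + T * filt

# S = T = 0 passes Fc through unchanged (the large-motion case);
# S = T = 1 outputs the previous frame Fp (fully stationary).
assert two_stage_blend(100.0, 140.0, S=1.0, T=1.0) == 140.0
assert two_stage_blend(100.0, 140.0, S=0.0, T=0.5) == 100.0
```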
Referring to
As an example, below is a table of threshold values and corresponding values of S that may result:
The bit encoder then encodes the separate inputs, and then sends the results to look up table (LUT) 346. The results are used to determine the output S, the stationary signal discussed above. S is essentially an abstraction of the formulated values in a simplified form. S can change, as the shape of the curve that represents S can change with respect to changes in thresholds. Referring to
In operation, using the GOP structure below, the In-Loop pre-filter is capable of consistently using frames with consecutive index numbers, for example:
I0P1P2P3P4P5P6P7I8P9P10P11P12P13 . . .
In this embodiment, pixels in the co-located blocks are processed, one at a time, according to the amount of motion and high frequency in a small neighborhood around the currently processed pixel. The difference between co-located pixels in blocks of same-polarity fields separated by one frame time is used as the basis for motion detection. Temporal differences of neighboring pixels are filtered so as to produce a small region-of-interest indication of motion.
In one case, when there is no motion in the picture, the frame differences will be small and due only to coding noise. In this situation the value of M may be greater than 0.5, and typically close but not equal to 1.0, so that the resulting value stored in the frame buffer is, for example, 0.5*Fc+0.5*Fp. In the other extreme case, when large motion is detected, the value of M is 0.0 and therefore Fc goes through the system unchanged.
Referring again to
The logic in
As indicated in
The temporal filter signal Filt is further qualified by comparing it to the unfiltered current data Fc as indicated in the lower part of
The conceptual operation of the In-Loop filter depicted in
Referring now to
Filt=S·(Fp−Fc)+Fc

InLoopOutput=T·(Filt−Fc)+Fc
Since
(Filt−Fc)=S·(Fp−Fc)
After simplification and rearrangement to account for hardware dependencies, the equation becomes:
InLoopOutput=S·(Fp−Fc)·T+Fc
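The rearrangement is easy to confirm numerically; a short check, with hypothetical values, that the cascaded form and the single-product form coincide for arbitrary S and T:

```python
fc, fp = 100.0, 140.0
for S in (0.0, 0.25, 0.5, 1.0):
    for T in (0.0, 0.5, 1.0):
        filt = S * (fp - fc) + fc
        cascaded = T * (filt - fc) + fc        # two-stage datapath
        single = S * (fp - fc) * T + fc        # rearranged form
        assert abs(cascaded - single) < 1e-12
```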
An additional simplification arises because the output of the MTF look-up table is held to a threshold. Since the contents of the MTF look-up table are monotonically decreasing with increasing input value, the MTF LUT can be omitted and the thresholding performed on the output of the motion low pass filter. Furthermore, since the output of the Stationarity Selection table is a thresholded version of the output of MLUT, a single look-up table can be used for this purpose. In contrast to the embodiment illustrated in
Referring to
The difference of these inputs is derived in arithmetic unit 402, and the result is sent to multiplier 404 and absolute value unit 406. The result is the difference in co-located pixels in a frame, and these differences are used by the in-loop temporal pre-filter according to the invention to produce a higher quality output video picture. The absolute value result is transmitted to low pass filter 408. The low pass filter then transmits the result to the selection unit. Unlike the embodiment of
Simultaneously, the Fc value is fed into the 7-tap filter 414, which is defined as a low pass filter. The output from the 7-tap filter is then compared to the Fc value in adder 416, then sent to gain unit 418, illustrated as a 6[4,2] bit value, to produce a high frequency detail signal. Gain unit 418 controls the amount of high frequency relevant for texture detection. This value is set externally based on the statistical characteristics of the input sequence; for example, if the input sequence is determined to have globally low texture, the value of gain 418 is set high so that even small textures are taken into account. The value of gain 418 may range from 0.25 to 15.75, for example. This high frequency result is sent to selection unit 412 along with value M. The selection unit receives as inputs motion thresholds M0, M1, and M2, as well as high frequency thresholds H0, H1, and H2, where all of the thresholds are illustrated as 8 bit values. These thresholds are predetermined in a manner to effectively choose stationarity coefficients used to produce the 8 bit output S shown here. The function of the selection unit is to convert the high frequency and motion values into a stationary signal having coefficient values. High frequency values are representative of picture texture, where the amount of high frequency in a picture is an indicator of detailed textures.
The stationary signal S is then multiplied, in multiplication unit 404, by the differential signal (Fp−Fc) from arithmetic unit 402; the product is transmitted to the absolute value unit ABS 422, and also transmitted to multiplication unit 418. The ABS 422 then sends the absolute value result to the shift right unit 423, and that result is sent to look up table (LUT) 424. Here, the maximum threshold value is set, as discussed above. In this embodiment, since the contents of the MTF (Motion Transfer Function) LUT are monotonically decreasing with increasing input value, the MTF LUT [See LUT 324,
In this embodiment, the 8 bit value is now defined as
Filt=S*(Fp−Fc)+Fc
and,
In-loop=T*(Filt−Fc)+Fc
However, since now
(Filt−Fc)=S*(Fp−Fc)
This gives:
In-loop=T*S*(Fp−Fc)+Fc
Rearranged to match the hardware ordering, this becomes:
In-loop=S*(Fp−Fc)*T+Fc
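The reduced datapath can be sketched end to end in Python. The table `t_lut` is a hypothetical 256-entry combined look-up table standing in for LUT 424, and the shift amount follows the 2-bit shifting unit described earlier:

```python
import numpy as np

def reduced_datapath(fc, fp, S, t_lut, shift=2):
    """One product S*(Fp-Fc); its absolute value, right-shifted as in
    unit 423, indexes the single combined look-up table (424) for T;
    one multiply-add then returns the in-loop output."""
    diff = S * (fp - fc)                                   # multiplier 404
    idx = np.minimum(np.abs(diff).astype(int) >> shift, len(t_lut) - 1)
    T = t_lut[idx]                                         # LUT 424
    return diff * T + fc                                   # In-loop = S*(Fp-Fc)*T + Fc

# Monotonically decreasing table: large differences imply motion, so T
# falls toward 0 and Fc passes through nearly unchanged.
t_lut = np.linspace(1.0, 0.0, 256)
out = reduced_datapath(np.full(8, 100.0), np.full(8, 140.0), S=0.5, t_lut=t_lut)
```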
Using the alternative embodiment illustrated in
The invention has been described in the context of a pre-filtering loop for an encoder, and the embodiments above are intended as examples of implementations of the invention. Those skilled in the art will understand that the invention actually has broader scope, which is defined by the appended claims and all equivalents.
The present application is a continuation of and claims priority under 35 U.S.C. §120 to U.S. application Ser. No. 11/230,943, entitled “Method, System and Device for Improving Video Quality through In-Loop Temporal Pre-Filtering,” filed by Alexandros Michael Tourapis et al., on Sep. 19, 2005, now U.S. Pat. No. 8,218,655, which is hereby incorporated by reference in its entirety.
Number | Name | Date | Kind |
---|---|---|---|
5053857 | Sandbank et al. | Oct 1991 | A |
5493513 | Keith et al. | Feb 1996 | A |
5493514 | Keith et al. | Feb 1996 | A |
5528238 | Nickerson | Jun 1996 | A |
5532940 | Agarwal et al. | Jul 1996 | A |
5535138 | Keith | Jul 1996 | A |
5537338 | Coelho | Jul 1996 | A |
5539662 | Nickerson | Jul 1996 | A |
5539663 | Agarwal | Jul 1996 | A |
5559722 | Nickerson | Sep 1996 | A |
5638068 | Nickerson | Jun 1997 | A |
5719966 | Brill et al. | Feb 1998 | A |
6239847 | Deierling | May 2001 | B1 |
6281942 | Wang | Aug 2001 | B1 |
6996186 | Ngai et al. | Feb 2006 | B2 |
7068722 | Wells | Jun 2006 | B2 |
7346226 | Shyshkin | Mar 2008 | B2 |
7403568 | Dumitras et al. | Jul 2008 | B2 |
7430335 | Dumitras et al. | Sep 2008 | B2 |
7860334 | Li et al. | Dec 2010 | B2 |
8218655 | Tourapis et al. | Jul 2012 | B2 |
20030160899 | Ngai et al. | Aug 2003 | A1 |
20040212734 | MacInnis et al. | Oct 2004 | A1 |
20050036558 | Dumitras et al. | Feb 2005 | A1 |
20050036704 | Dumitras et al. | Feb 2005 | A1 |
20070002946 | Bouton et al. | Jan 2007 | A1 |
20070064815 | Alvarez et al. | Mar 2007 | A1 |
20070133896 | Boroczky et al. | Jun 2007 | A1 |
20130003872 | Alvarez et al. | Jan 2013 | A1 |
20130182971 | Leontaris et al. | Jul 2013 | A1 |
Entry |
---|
Final Office Action for U.S. Appl. No. 11/230,943, mailed on Dec. 6, 2011. |
Non-Final Office Action for U.S. Appl. No. 11/230,943, mailed on Aug. 5, 2011. |
Final Office Action for U.S. Appl. No. 11/230,943, mailed on Feb. 2, 2011. |
Non-Final Office Action for U.S. Appl. No. 11/230,943, mailed on Aug. 18, 2010. |
Number | Date | Country | |
---|---|---|---|
20130003872 A1 | Jan 2013 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 11230943 | Sep 2005 | US |
Child | 13545930 | US |