1. Field of the Invention
The present invention relates to methods for improving the display quality of an image. More specifically, the present invention is directed to methods of performing cross color and/or cross luminance suppression to improve display quality.
2. Description of the Prior Art
In composite video television systems such as NTSC and PAL, a luminance signal and a chrominance signal share a portion of the available bandwidth. In NTSC, for example, chrominance information is encoded on a sub-carrier having a frequency of approximately 3.58 MHz (3.579545 MHz). Within the chrominance band, which extends from roughly 2.3 MHz to 4.2 MHz, the luminance spectrum overlaps the chrominance spectrum, and this overlap results in signal interference.
It is well known that a television decoder extracts both luminance information and chrominance information from the received composite signal; however, a typical simple television decoder cannot discern which of the higher-frequency components are luminance information and which are chrominance information. As a result, such a television decoder generates incorrect chrominance information owing to the interference introduced by the luminance spectrum. The term “cross color” commonly refers to corruption of the chrominance spectrum caused by the misinterpretation of high-frequency luminance information as wanted chrominance information. Conversely, the term “cross luminance” commonly refers to corruption of the luminance spectrum caused by the misinterpretation of chrominance information as high-frequency luminance information.
Some conventional methods reduce cross color by operating upon chrominance information encoded on the chrominance subcarrier prior to demodulation into baseband chrominance information. These methods typically incorporate cross color suppression into the decoding process, focusing on improving the separation of the chrominance and luminance information to reduce both cross color and cross luminance.
However, cross color suppression is very desirable in applications where only demodulated baseband chrominance information is available, especially where demodulation was performed without much regard for suppressing cross color. In such applications, for practical reasons, cross color suppression must be performed in the baseband domain.
As such, Faroudja describes a technique for suppressing cross color in U.S. Pat. No. 5,305,120, the contents of which are hereby incorporated by reference. Although Faroudja suggests a feasible approach for post-decoding cross color suppression, a more optimized motion detection algorithm is desired in order to minimize errors in the outcome of cross color suppression caused by over-simplified stationary image judgment.
It is therefore one of the objectives of the claimed invention to provide methods of suppressing cross color and/or cross luminance of an image by introducing a well-designed motion detection algorithm.
According to one exemplary embodiment of the present invention, a method for processing an image in video data is provided. The video data comprises a plurality of frames. The method comprises: obtaining a plurality of differences, each difference in the plurality of differences being obtained from two frames that are one frame apart, wherein each difference in the plurality of differences is between pixel information of one pixel from a plurality of pixels in one of the two frames and pixel information of a corresponding pixel in the other frame of the two frames; examining a first criterion with a summation of the plurality of differences; and performing a cross color suppression operation on a current frame of the plurality of frames according to a set of stationary image judgment information comprising the result of the first criterion examination.
According to another exemplary embodiment of the present invention, a method for processing an image in video data is provided. The video data comprises a plurality of frames. The method comprises: obtaining a first difference set between pixel information of one of the frames and pixel information of another one of the frames, wherein the two frames involved in the first difference set are one frame apart, and the first difference set comprises a plurality of differences, each difference of the plurality of differences being between pixel information of a pixel in one of the two frames involved in the first difference set and pixel information of a corresponding pixel in the other one of the two frames involved in the first difference set; summing the absolute values of the differences in the first difference set to thereby obtain a sum of absolute differences; examining a first criterion according to the sum of absolute differences; and performing a cross color suppression operation on pixel information of a current frame of the plurality of frames according to a set of stationary image judgment information comprising the result of the first criterion examination.
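A minimal sketch of the sum-of-absolute-differences criterion described above is given below. The function names, the representation of each frame as a 2-D list of per-pixel (Y, U, V) tuples, and the threshold value are assumptions made purely for illustration and are not part of the embodiment itself.

```python
# Illustrative sketch only; names, data layout, and the threshold are assumptions.
def sum_of_absolute_differences(frame_a, frame_b):
    """Sum the absolute differences between corresponding pixel values of two
    frames that are one frame apart (the first difference set described above)."""
    sad = 0
    for row_a, row_b in zip(frame_a, frame_b):
        for pix_a, pix_b in zip(row_a, row_b):
            sad += sum(abs(a - b) for a, b in zip(pix_a, pix_b))
    return sad

def first_criterion_met(frame_a, frame_b, threshold=1000):
    """Examine the first criterion: the sum of absolute differences must stay
    below a (hypothetical) threshold for the image to be judged stationary."""
    return sum_of_absolute_differences(frame_a, frame_b) < threshold
```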
These and other objectives of the present invention will no doubt become obvious to those of ordinary skill in the art after reading the following detailed description of the preferred embodiment that is illustrated in the various figures and drawings.
Please refer to
Before further explaining the operation of the apparatus 100, certain preliminary knowledge pertaining to image frame composition should be understood. Here, the well-known NTSC systems are taken as an example for explanatory purposes. Please refer to
As is well known to one of ordinary skill in the art, in NTSC systems, which are adopted as an example in the following description of the embodiment of the invention, the chrominance subcarrier phase rotates by 180 degrees between successive frames. This rotation causes luminance information misinterpreted as chrominance information to oscillate between two complementary colors, such as red and green; that is, the misinterpreted luminance appears as spectral energy oscillating between two colors represented by chrominance information 180 degrees out of phase with each other. A similar 180-degree phase rotation between successive frames can also be observed in the cross luminance phenomenon, i.e., the corruption of the luminance spectrum by chrominance information.
Therefore, by averaging the chrominance information in two successive frames, the out-of-phase cross color information cancels, thereby allowing chrominance information to be obtained which is substantially free of cross color. Likewise, the cross luminance information can also be cancelled by a similar averaging operation. However, this technique is most effective when the image is stationary, or still. As a result, a well-designed motion detection algorithm (or stationary judgment algorithm) can enhance the cross color and/or cross luminance suppression effect, as well as the display quality of the processed output, since an improper cross color and/or cross luminance suppression operation based on a poor motion detection algorithm degrades the display quality drastically.
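Purely as an illustration with hypothetical values (using the notation adopted later in this description, where an unprimed symbol denotes the current frame and a primed symbol denotes the preceding frame), suppose the true baseband chrominance of a stationary pixel is U and the cross color contribution in the current frame is +k; one frame later the contribution is 180 degrees out of phase, i.e., −k:

UEo = U + k

U′Eo = U − k

(UEo + U′Eo)/2 = U

The cross color term thus cancels while the true chrominance value is preserved; the same reasoning applies to V and to the cross luminance term in Y.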
As such, in an embodiment of the present invention, a motion detection algorithm adopted by the motion detector 102 is provided in the following description. Take the sequentially incoming image fields shown in
dY1 = |YEo − Y″Eo| < Thl_Y1 (1)

dU1 = |UEo − U″Eo| < Thl_U1 (2)

dV1 = |VEo − V″Eo| < Thl_V1 (3)
wherein Y, U, and V represent the luminance information and the two pieces of chrominance information of the corresponding pixel data, respectively, and Thl_Y, Thl_U, and Thl_V are threshold values whose magnitudes should be determined according to the actual application. In this embodiment, only when the values of the above three functions (1), (2), and (3) are all true is the first condition asserted to be true.
Please note that, although in this embodiment differences in pixel data between two frames are adopted to indicate the degree of similarity between two frames, other known ways of indicating similarity may also be utilized. Please also note that, although in this embodiment only the pixel information of the current pixel (i.e., Eo) is adopted for similarity determination, more pixels may be incorporated into such determination. For example, function (1) may also be substituted by the following function:
That is, in addition to the current pixel Eo, the surrounding eight pixels in the same field are also incorporated into the similarity determination. Of course, the number and position of the pixels incorporated may be altered, and similar substitutions may also be applied to functions (2) and/or (3).
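A minimal sketch of this per-pixel similarity check is given below, covering both the single-pixel form of functions (1), (2), and (3) and the neighborhood variant of function (1). The function names, the representation of each field as a dict of 2-D channel arrays (fields[0] being the field containing the current pixel Eo and fields[2] the corresponding field two frames prior), and the threshold values are illustrative assumptions only.

```python
# Illustrative sketch only; names, data layout, and thresholds are assumptions.
THL_Y1, THL_U1, THL_V1 = 16, 8, 8        # hypothetical per-channel thresholds
THL_Y1_SUM = 9 * THL_Y1                  # hypothetical threshold for the 3x3 variant

def channel_similar(field_a, field_b, x, y, channel, threshold):
    """Functions (1)-(3): the absolute difference for one channel of the pixel Eo
    between two fields that are two frames apart must fall below its threshold."""
    return abs(field_a[channel][y][x] - field_b[channel][y][x]) < threshold

def neighborhood_similar(field_a, field_b, x, y, channel, threshold=THL_Y1_SUM):
    """Neighborhood variant of function (1): sum of absolute differences over the
    current pixel and its eight surrounding pixels in the same field.
    Boundary handling is omitted for brevity."""
    sad = sum(abs(field_a[channel][y + dy][x + dx] - field_b[channel][y + dy][x + dx])
              for dy in (-1, 0, 1) for dx in (-1, 0, 1))
    return sad < threshold

def first_condition(fields, x, y):
    """First condition: functions (1), (2), and (3) must all be true."""
    return (channel_similar(fields[0], fields[2], x, y, 'Y', THL_Y1) and
            channel_similar(fields[0], fields[2], x, y, 'U', THL_U1) and
            channel_similar(fields[0], fields[2], x, y, 'V', THL_V1))
```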
Moreover, in determining whether the image is stationary or not for the current pixel, here Eo, it may also be instructive to check for the similarity between a corresponding pixel (Ee) in the complementary field of the same frame, here the even field, and the corresponding pixel (Ee″) two frames prior thereto. This is because pixels in the complementary field of the same frame contribute half of the frame, and therefore should be indicative in determining whether an image is stationary or not. Likewise, the above-mentioned adoption of Y, U, and/or V pixel information in determining similarity, and the adoption of multiple pixels around the current pixel, may also be applied to such checking for similarity between the complementary field and its counterpart two frames prior.
When deriving a result for the first condition, it may also be meaningful to optionally check for the similarity between frame 1 at time T-1, which precedes the current frame 0, and frame 3 at time T-3, which is two frames prior to frame 1. As an example, the following three functions can be utilized:
dY2 = |Y′Eo − Y′″Eo| < Thl_Y2 (4)

dU2 = |U′Eo − U′″Eo| < Thl_U2 (5)

dV2 = |V′Eo − V′″Eo| < Thl_V2 (6)
This is because, in determining whether an image at time T is stationary or not, it can also be indicative to check whether the image one frame earlier (i.e., at time T-1) is stationary, since the stillness of an image is assessed in a consecutive context. In an embodiment where these functions are incorporated in determining the outcome of the first condition, only when the values of functions (1), (2), (3), (4), (5), and (6) are all true is the first condition asserted to be true.
Of course, as can be appreciated by those of ordinary skill in the art, the above-mentioned adoption of multiple pixels around the current pixel, and the adoption of pixel information of the complementary fields, may also be applied to such checking for stillness between frame 1 and frame 3, which is two frames prior.
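Expressed compactly, and purely as an illustration, the extended first condition can be regarded as a logical conjunction of the individual checks:

first condition = (1) AND (2) AND (3) AND (4) AND (5) AND (6)

optionally further AND-ed with the corresponding checks on the complementary-field pixel Ee and, where adopted, the neighborhood variants of the individual functions.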
In addition to the first condition, a second condition is further considered in determining the stillness of the image for the current pixel, wherein the similarity between a frame that is one frame prior to the current frame (i.e., frame 1 at time T-1) and an adjacent frame thereof (for example, frame 2 at time T-2) is examined. The checking for similarity between two adjacent frames, though it may not be as significant owing to the 180-degree out-of-phase characteristic of cross color in NTSC systems, can still be meaningful in determining stillness, considering the consecutive nature of stillness in an image. In this embodiment, the second condition may be implemented by observing the values of the following functions:
dY3 = |Y′Eo − Y″Eo| < Thl_Y3 (7)

dU3 = |U′Eo − U″Eo| < Thl_U3 (8)

dV3 = |V′Eo − V″Eo| < Thl_V3 (9)
Similarly, in this embodiment, only when the values of the above three functions (7), (8), and (9) are all true is the second condition asserted to be true.
In addition to the three functions (7), (8), and (9), further observations can be incorporated in determining the outcome of the second condition, such as the similarity between the frame which is two frames prior to the current frame (i.e., frame 2 at time T-2) and an adjacent frame thereof (for example, frame 3 at time T-3). The following functions may serve as one such example:
dY4 = |Y″Eo − Y′″Eo| < Thl_Y4 (10)

dU4 = |U″Eo − U′″Eo| < Thl_U4 (11)

dV4 = |V″Eo − V′″Eo| < Thl_V4 (12)
In such an embodiment, only when the values of functions (7), (8), (9), (10), (11), and (12) are all true is the second condition asserted to be true.
Please note that, when the required pixel information is available, the above-mentioned functions (7), (8), and (9) may also be adapted to check the similarity between frame 1 and the other adjacent frame thereof, which is the current frame 0, and the above-mentioned functions (10), (11), and (12) may also be adapted to check the similarity between frame 2 and the other adjacent frame thereof, which is frame 1. Also, as can be appreciated by those of ordinary skill in the art, the above-mentioned adoption of multiple pixels around the current pixel, and the adoption of pixel information of the complementary fields, may also be applied to such checking for similarity between two adjacent frames.
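A sketch of the second-condition check might look as follows. The helper names, the representation of each frame as a dict of 2-D channel arrays (frames[1] being the frame at time T-1, frames[2] the frame at time T-2, and so on), and the threshold values are assumptions made purely for illustration.

```python
# Illustrative sketch only; frame indexing and thresholds are assumptions.
THL_Y3, THL_U3, THL_V3 = 16, 8, 8   # hypothetical thresholds for functions (7)-(9)

def adjacent_frames_similar(frame_a, frame_b, x, y,
                            th_y=THL_Y3, th_u=THL_U3, th_v=THL_V3):
    """Functions (7)-(9): per-channel absolute differences of the pixel Eo
    between two adjacent frames must all fall below their thresholds."""
    return (abs(frame_a['Y'][y][x] - frame_b['Y'][y][x]) < th_y and
            abs(frame_a['U'][y][x] - frame_b['U'][y][x]) < th_u and
            abs(frame_a['V'][y][x] - frame_b['V'][y][x]) < th_v)

def second_condition(frames, x, y, extended=True):
    """Second condition: frame 1 vs. frame 2 (functions (7)-(9)); when the
    extended form is used, frame 2 vs. frame 3 (functions (10)-(12)) as well."""
    ok = adjacent_frames_similar(frames[1], frames[2], x, y)
    if extended:
        ok = ok and adjacent_frames_similar(frames[2], frames[3], x, y)
    return ok
```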
In addition to the first and the second conditions, a third condition, which is termed the “high-frequency stillness” condition, is further examined in determining the stillness of the image for the current pixel. The third condition checks for the consecutive stationary situation of frames 0, 1, and 2, respectively at time T, T-1, and T-2, by utilizing, as an example, the following operations. First, the following operators are defined:
dNext_Y = Y′Eo − YEo

dPre_Y = Y′Eo − Y″Eo

dNext_U = U′Eo − UEo

dPre_U = U′Eo − U″Eo

dNext_V = V′Eo − VEo

dPre_V = V′Eo − V″Eo
Then, the following condition pertaining to the operator dNext_Y is checked to determine the value of an additional operator Next_Y:
Similar conditions respectively pertaining to the operators dPre_Y, dNext_U, dPre_U, dNext_V, and dPre_V are also checked to determine the corresponding operators Pre_Y, Next_U, Pre_U, Next_V, and Pre_V. Lastly, the following condition pertaining to the operators Next_Y and Pre_Y is further checked to determine the value of yet another operator NextPre_Y:
Similar conditions respectively pertaining to the operators Next_U and Pre_U, and Next_V and Pre_V, are also checked to determine the corresponding operators NextPre_U and NextPre_V. Here, if the value of the operator NextPre_Y is true, high-frequency alternation in the Y domain is deemed to exist, and such a high-frequency alternation phenomenon is not desirable for an image regarded as stationary. Therefore, in this embodiment, only when the values of the operators NextPre_Y, NextPre_U, and NextPre_V are all false is the third condition asserted to be true. Please note that, as is understood by a person of ordinary skill in the art, such a high-frequency stillness condition may also be expanded to incorporate laterally adjacent pixels (Do and/or Fo) and/or vertically adjacent pixels (Bo and/or Ho) of the current pixel Eo.
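Because the exact conditions defining Next_Y and NextPre_Y are not reproduced in this text, the sketch below substitutes one plausible interpretation: Next_X and Pre_X are assumed to be true when the corresponding difference exceeds a threshold, and NextPre_X is assumed to be true when both are true and frame 1 deviates from its two neighboring frames in the same direction, which is one way the high-frequency alternation described above could be detected. All names and the threshold are assumptions for illustration only.

```python
# Illustrative sketch only.  The threshold test and the same-sign test below are
# assumed interpretations of Next_X / Pre_X / NextPre_X, not the original
# definitions, which are not reproduced in this text.
TH_HF = 12  # hypothetical high-frequency threshold

def high_frequency_alternation(frames, x, y, channel):
    curr, prev1, prev2 = frames[0], frames[1], frames[2]
    d_next = prev1[channel][y][x] - curr[channel][y][x]    # dNext_X
    d_pre = prev1[channel][y][x] - prev2[channel][y][x]    # dPre_X
    nxt = abs(d_next) > TH_HF       # assumed definition of Next_X
    pre = abs(d_pre) > TH_HF        # assumed definition of Pre_X
    # Assumed definition of NextPre_X: frame 1 swings away from both of its
    # neighboring frames in the same direction, i.e. alternation at frame rate.
    return nxt and pre and (d_next * d_pre > 0)

def third_condition(frames, x, y):
    """High-frequency stillness: NextPre_Y, NextPre_U, and NextPre_V all false."""
    return not any(high_frequency_alternation(frames, x, y, c) for c in ('Y', 'U', 'V'))
```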
After all these operations are performed, the motion detector 102 determines whether the image is stationary or not for the current pixel. In this embodiment, the image is deemed stationary for the current pixel only when the first, the second, and the third conditions are all asserted to be true.
Please note that the hardware requirement, particularly the memory requirement, of the motion detector 102 to perform the above-mentioned condition checks varies according to the complexity of the conditions actually adopted, from 8 field buffers down to 4 field buffers, as will be appreciated by those of ordinary skill in the art. Also note that the estimated field buffer requirement need not include the currently incoming image field, taking advantage of the pixel-by-pixel operation.
After the motion detector 102 decides whether the image is stationary or not for the current pixel, the motion control signal is passed to the processor 104 to inform the processor 104 of the determination of the motion detector 102. If the image is deemed stationary for the current pixel, the cross color suppression and/or cross luminance suppression operation is launched by, in this embodiment, averaging the pixel information across two consecutive image frames (for example, (YEo + Y′Eo)/2, (UEo + U′Eo)/2, and (VEo + V′Eo)/2), or by other suppression methods known to a skilled artisan. If the image is deemed not stationary (i.e., with motion), in this embodiment the current pixel is output as received.
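Putting the pieces together, the per-pixel gating performed by the processor 104 might be sketched as follows; the function name, the frame representation, and the boolean stationary flag (standing in for the three conditions checked by the motion detector 102) are assumptions made for illustration only.

```python
# Illustrative sketch only; names and data layout are assumptions.
def suppress_pixel(frames, x, y, stationary):
    """If the motion detector deems the pixel stationary, average the pixel
    information across two consecutive frames to cancel the out-of-phase
    cross color / cross luminance terms; otherwise output the pixel as received."""
    curr, prev1 = frames[0], frames[1]
    if stationary:
        return {c: (curr[c][y][x] + prev1[c][y][x]) / 2 for c in ('Y', 'U', 'V')}
    return {c: curr[c][y][x] for c in ('Y', 'U', 'V')}
```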
Please refer to
Although the detailed description of the embodiments of the invention has been focused on the application in NTSC systems, the present invention may also be adapted to other display systems, such as PAL systems. One point worth noting is that for PAL systems the chrominance subcarrier phase rotates by 90 degrees between successive frames, rather than the 180 degrees of NTSC systems. Therefore, luminance information misinterpreted as chrominance information rotates in phase by 90 degrees for each incoming frame. With this in mind, a skilled artisan should be able to apply the claimed invention to a PAL system and gain a similar improvement in display quality.
Those skilled in the art will readily observe that numerous modifications and alterations of the device and method may be made while retaining the teachings of the invention. Accordingly, the above disclosure should be construed as limited only by the metes and bounds of the appended claims.
This continuation application claims the benefit of co-pending U.S. patent application Ser. No. 10/710,072, filed on Jun. 16, 2004 (now U.S. Pat. No. 7,280,159) and incorporated herein by reference.
Number | Name | Date | Kind |
---|---|---|---|
4530004 | Achiha et al. | Jul 1985 | A |
4670773 | Silverberg | Jun 1987 | A |
4706112 | Faroudja | Nov 1987 | A |
4723157 | Wendland et al. | Feb 1988 | A |
4731660 | Faroudja | Mar 1988 | A |
4831463 | Faroudja | May 1989 | A |
4837611 | Faroudja | Jun 1989 | A |
4893176 | Faroudja | Jan 1990 | A |
4916526 | Faroudja | Apr 1990 | A |
4918515 | Faroudja | Apr 1990 | A |
4943849 | Faroudja | Jul 1990 | A |
4967271 | Campbell et al. | Oct 1990 | A |
4982280 | Lyon | Jan 1991 | A |
4984068 | Sugiyama et al. | Jan 1991 | A |
5012329 | Lang et al. | Apr 1991 | A |
5019895 | Yamamoto et al. | May 1991 | A |
5023713 | Nishigori | Jun 1991 | A |
5027194 | Scheffler | Jun 1991 | A |
5051826 | Ishii | Sep 1991 | A |
5055920 | Illetschko et al. | Oct 1991 | A |
5063438 | Faroudja | Nov 1991 | A |
5146318 | Ishizuka et al. | Sep 1992 | A |
5249037 | Sugiyama et al. | Sep 1993 | A |
5305095 | Song | Apr 1994 | A |
5305120 | Faroudja | Apr 1994 | A |
5428398 | Faroudja | Jun 1995 | A |
5448305 | Hagino | Sep 1995 | A |
5457501 | Hong | Oct 1995 | A |
5475438 | Bretl | Dec 1995 | A |
5483294 | Kays | Jan 1996 | A |
5502509 | Kurashita et al. | Mar 1996 | A |
5689301 | Christopher | Nov 1997 | A |
6034733 | Balram | Mar 2000 | A |
6052312 | Ishii | Apr 2000 | A |
6108041 | Faroudja | Aug 2000 | A |
6133957 | Campbell | Oct 2000 | A |
6317165 | Balram | Nov 2001 | B1 |
6377308 | Cahill | Apr 2002 | B1 |
6580463 | Swartz | Jun 2003 | B2 |
6891571 | Shin | May 2005 | B2 |
6956620 | Na | Oct 2005 | B2 |
6987884 | Kondo et al. | Jan 2006 | B2 |
6995804 | Kwon et al. | Feb 2006 | B2 |
7061548 | Piepers | Jun 2006 | B2 |
7084923 | Brown | Aug 2006 | B2 |
7098957 | Kim et al. | Aug 2006 | B2 |
7154556 | Wang | Dec 2006 | B1 |
7227587 | MacInnis et al. | Jun 2007 | B2 |
7271850 | Chao | Sep 2007 | B2 |
7280159 | Chao | Oct 2007 | B2 |
7423691 | Orlick | Sep 2008 | B2 |
7432987 | Shan et al. | Oct 2008 | B2 |
20020093587 | Michel | Jul 2002 | A1 |
20030112369 | Yoo | Jun 2003 | A1 |
20040017507 | Clayton | Jan 2004 | A1 |
20040114048 | Jung | Jun 2004 | A1 |
20050018086 | Lee | Jan 2005 | A1 |
20050134745 | Bacche et al. | Jun 2005 | A1 |
20050168650 | Walls et al. | Aug 2005 | A1 |
20050270415 | Jiang | Dec 2005 | A1 |
20060187344 | Corral | Aug 2006 | A1 |
20060203125 | Sayre | Sep 2006 | A1 |
20060228022 | Chao | Oct 2006 | A1 |
Number | Date | Country | |
---|---|---|---|
20070258013 A1 | Nov 2007 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 10710072 | Jun 2004 | US |
Child | 11760792 | US |