1. Field of the Invention
The invention relates in general to a de-interlacing device and a method therefor, and more particularly to an edge-based de-interlacing device adapted to motion frames, and a method therefor.
2. Description of the Related Art
A film is generally shot and displayed in an interlaced manner. Taking the television as an example, one frame is displayed by first displaying its odd field and then its even field. The odd field is composed of the odd-numbered lines of the frame, and the even field is composed of the even-numbered lines of the frame.
The typical television has a frame rate of 30 Hz; that is, 30 frames are displayed per second. Because the odd and even fields of each frame are displayed alternately, 60 fields are displayed per second.
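As an illustrative sketch only (not part of the claimed device), the field structure described above can be expressed in Python, assuming a frame is represented as a list of rows numbered from line 1:

```python
def split_fields(frame):
    """Split a frame (a list of rows) into its odd and even fields.

    Lines are numbered from 1 in the text, so frame[0] is line 1 and
    belongs to the odd field.
    """
    odd_field = frame[0::2]   # lines 1, 3, 5, ...
    even_field = frame[1::2]  # lines 2, 4, 6, ...
    return odd_field, even_field
```

Interlaced display then alternates the two fields, which is why a 30-frames-per-second signal yields 60 fields per second.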
The demand for image quality keeps increasing, and the quality of a non-interlaced (i.e., progressive) image is better than that of an interlaced image. Thus, advanced video display devices, for example, the high definition television (HDTV), adopt the progressive displaying function.
If interlaced images are to be displayed in the non-interlaced manner, each interlaced frame has to be de-interlaced and then displayed once as a complete frame.
However, the de-interlacing method mentioned above tends to cause image flicker. Because the two fields of an interlaced image are shot at different time instants, the image has to be further processed so as to enhance the image quality.
It is therefore an object of the invention to provide a de-interlacing device and a method therefor.
The invention achieves the above-identified object by providing a de-interlacing device for an image. The image includes at least one field, which includes a plurality of pixels arranged in an M*N matrix. The de-interlacing device is for determining a brightness value of the current pixel and includes a motion detector for determining whether or not the current pixel is motional, an edge detector for detecting an edge type of the current pixel if the current pixel is motional, and an edge-based de-interlacer, which comprises a reference pixel determiner for determining a plurality of edge reference pixels having the strongest correlation with the current pixel according to the edge type, and a brightness value determiner for determining the brightness value of the current pixel according to the edge reference pixels.
The invention also achieves the above-identified object by providing a de-interlacing method. First, it is determined whether or not the current pixel is motional. If the current pixel is motional, the edge type of the current pixel is detected. The current pixel is classified as one of the following edge types: a smooth region, a vertical edge, a horizontal edge, or an inclined edge. Next, a plurality of edge reference pixels having the strongest correlation with the current pixel is determined according to the edge type. Finally, the brightness value of the current pixel is determined according to these edge reference pixels.
Other objects, features, and advantages of the invention will become apparent from the following detailed description of the preferred but non-limiting embodiments. The following description is made with reference to the accompanying drawings.
The motion detector 110 receives interlaced image data and determines whether the interlaced image data is motional or motionless. If the interlaced image data is motionless, it is de-interlaced by the temporal mean filter 120; otherwise, it is processed by the edge detector 130. The motion detector 110 of this embodiment determines whether each pixel belongs to the motion portion or the motionless portion of the image. If the pixel belongs to the motion portion, it is passed to the edge detector 130; otherwise, it is passed to the temporal mean filter 120.
The temporal mean filter 120 de-interlaces the motionless pixel and then outputs the de-interlaced pixel. The manner of de-interlacing the motionless image is, for example, a conventional temporal average method, and a detailed description thereof is omitted.
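A minimal sketch of the conventional temporal average mentioned above, under the assumption that a motionless missing pixel is filled with the mean of the co-located pixels in the temporally adjacent fields (the function name and field layout are illustrative, not from the patent):

```python
def temporal_mean(prev_field, next_field, x, y):
    """Fill the missing pixel at column x, row y with the average of the
    co-located pixels in the previous and next fields."""
    return (prev_field[y][x] + next_field[y][x]) / 2.0
```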
The edge detector 130 detects whether or not the motion image pixel is at an edge of the image and outputs a detection result to the edge-based de-interlacer 140. The edge-based de-interlacer 140 utilizes a corresponding de-interlacing method, according to the detection result of the edge detector 130, to obtain a better image at a higher speed.
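The data path of the de-interlacing device 100 can be summarized in a hedged sketch, where the four components are passed in as callables (the function signatures are illustrative placeholders for the elements described in detail below):

```python
def deinterlace_pixel(pixel, is_motional, temporal_filter,
                      edge_detector, edge_deinterlacer):
    """Route a pixel through the device-100 pipeline: motionless pixels go
    to the temporal mean filter; motional pixels go to the edge detector
    and then the edge-based de-interlacer."""
    if not is_motional(pixel):
        return temporal_filter(pixel)
    edge_type = edge_detector(pixel)
    return edge_deinterlacer(pixel, edge_type)
```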
The function and operation of each element of the de-interlacing device 100 have been briefly described in the above, and will be described in detail in the following.
Motion Detector 110
The conventional motion image detection is performed according to the BPPD (Brightness Profile Pattern Difference) method, which utilizes the first-order difference and the second-order difference of the brightness value to determine the motion image. The BPPD method enhances the moving sensitivity in the left and right directions but ignores movements in the up and down directions. The conventional BPPD method is described in the following. The parameters required by the BPPD method include a brightness value B, a brightness difference Bd, a brightness profile value D, and a brightness profile value P. First, the brightness difference Bd is defined as:
Bd_a^n = |B_a^n - B_a^(n-2)|   (2-1).
Next, the brightness profile values D are defined as:
D_ab^n = B_a^n - B_b^n   (2-2), and
D_ac^n = B_a^n - B_c^n   (2-3).
According to the previous discussion, setting the value of λ to 2 reduces the sensitivity with respect to noise. Next, the brightness profile value P is defined as:
P_a^n = |D_ab^n - D_ab^(n-2)| + |D_ac^n - D_ac^(n-2)|   (2-4).
The brightness profile value P measures the difference gradients between one pixel and its adjacent pixels in different fields.
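Equations (2-1) to (2-4) can be sketched in Python, assuming the brightness samples are stored in a dictionary B mapping (pixel_name, field_index) to a brightness value; the pixel names and the dictionary layout are illustrative assumptions, only the arithmetic follows the text:

```python
def brightness_difference(B, p, n):
    # Bd_p^n = |B_p^n - B_p^(n-2)|, as in equation (2-1)
    return abs(B[(p, n)] - B[(p, n - 2)])

def profile_D(B, p, q, n):
    # D_pq^n = B_p^n - B_q^n, as in equations (2-2) and (2-3)
    return B[(p, n)] - B[(q, n)]

def profile_P(B, p, left, right, n):
    # P_p^n sums the field-to-field changes of the two profile values,
    # as in equation (2-4)
    return (abs(profile_D(B, p, left, n) - profile_D(B, p, left, n - 2))
            + abs(profile_D(B, p, right, n) - profile_D(B, p, right, n - 2)))
```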
In the conventional BPPD method, the following six parameter values are used to determine whether the image is motional:
Bd_a^n = |B_a^n - B_a^(n-2)|   (2-5),
Bd_d^n = |B_d^n - B_d^(n-2)|   (2-6),
Bd_g^(n+1) = |B_g^(n+1) - B_g^(n-1)|   (2-7),
P_a^n = |D_ab^n - D_ab^(n-2)| + |D_ac^n - D_ac^(n-2)|   (2-8),
P_d^n = |D_de^n - D_de^(n-2)| + |D_df^n - D_df^(n-2)|   (2-9), and
P_g^(n+1) = |D_gh^(n+1) - D_gh^(n-1)| + |D_gk^(n+1) - D_gk^(n-1)|   (2-10).
The former three parameter values (i.e., the brightness differences Bd_a^n, Bd_d^n, and Bd_g^(n+1)) may be used to detect the motion image. The latter three parameter values (i.e., the horizontal brightness profile values P_a^n, P_d^n, and P_g^(n+1)) are for increasing the moving sensitivity in the left and right directions. However, such a method does not consider the movements in the up and down directions. Thus, the embodiment of the invention further adds a parameter, a vertical brightness profile value, to improve the motion image detection:
Pv^n = |D_ad^n - D_ad^(n-2)| + |D_be^n - D_be^(n-2)| + |D_cf^n - D_cf^(n-2)|   (2-11).
In addition, three threshold values Ta, Tb, and Tc are defined for inspecting the parameters Bd, P, and Pv, respectively. The motion detector 110 of the invention utilizes four fields to derive the seven parameters shown in Equations (2-5) to (2-11). When any one of the seven parameter values is greater than its corresponding threshold value, the current pixel is classified as motional. According to the experimental results, better results are obtained by setting the threshold values Ta, Tb, and Tc to be between 10 and 15, between 20 and 40, and between 30 and 60, respectively.
The motion detector 110 includes a parameter generator (not shown) and a parameter comparator (not shown). The parameter generator determines three motion reference pixels a, b, and c located above a current pixel g and three motion reference pixels d, e, and f located below the current pixel g, and calculates the brightness differences Bd_a^n, Bd_d^n, and Bd_g^(n+1), the horizontal brightness profile values P_a^n, P_d^n, and P_g^(n+1), and the vertical brightness profile value Pv^n according to the motion reference pixels and the current pixel. The parameter comparator compares the brightness differences Bd_a^n, Bd_d^n, and Bd_g^(n+1) to a first threshold value Ta; if any one of them is larger than Ta, the current pixel is motional. The parameter comparator also compares the horizontal brightness profile values P_a^n, P_d^n, and P_g^(n+1) to a second threshold value Tb; if any one of them is larger than Tb, the current pixel is motional. Furthermore, the parameter comparator compares the vertical brightness profile value Pv^n to a third threshold value Tc; if Pv^n is larger than Tc, the current pixel is motional.
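The decision rule of the parameter comparator can be sketched as follows; the default thresholds are arbitrary choices inside the experimentally reported ranges (Ta: 10 to 15, Tb: 20 to 40, Tc: 30 to 60) and are not prescribed by the text:

```python
def is_motional(brightness_diffs, horizontal_profiles, pv,
                ta=12, tb=30, tc=45):
    """Classify the current pixel as motional if any of the three brightness
    differences exceeds Ta, any of the three horizontal profile values
    exceeds Tb, or the vertical profile value Pv exceeds Tc."""
    return (any(bd > ta for bd in brightness_diffs)
            or any(p > tb for p in horizontal_profiles)
            or pv > tc)
```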
Edge Detector 130
The edge detector 130 of the embodiment of the invention utilizes a modified Sobel filter to detect the edge direction. The gradients of the image in the horizontal and vertical directions are detected by using the Sobel filter. According to the obtained gradients, the pixels may be divided into four edge types: a horizontal edge, a vertical edge, an inclined edge, and a smooth region. The horizontal edge indicates that the pixel is located on an edge in the horizontal direction. The vertical edge indicates that the pixel is located on an edge in the vertical direction. The inclined edge indicates that the pixel is located on an inclined edge. The smooth region indicates that the pixel is located in a region with small image gradient variations. The angle between the inclined edge and the x-axis (horizontal axis) may be 30, 45, or 60 degrees, or another value. The Sobel filter includes a 3×3 horizontal filter and a 3×3 vertical filter.
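A hedged sketch of the gradient computation and classification follows; the kernels are the standard 3×3 Sobel filters, while the smoothness threshold and the gradient ratio are illustrative assumptions, since the text only states that the four types are derived from the obtained gradients:

```python
# Standard 3x3 Sobel kernels; SOBEL_H responds to horizontal brightness
# change (i.e., vertical edges), SOBEL_V to vertical change (horizontal edges).
SOBEL_H = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_V = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_gradients(win):
    """Apply both 3x3 Sobel filters to a 3x3 brightness window."""
    gh = sum(SOBEL_H[i][j] * win[i][j] for i in range(3) for j in range(3))
    gv = sum(SOBEL_V[i][j] * win[i][j] for i in range(3) for j in range(3))
    return gh, gv

def classify_edge(gh, gv, t_smooth=20, ratio=2.0):
    """Map the two gradient responses to one of the four edge types."""
    if abs(gh) < t_smooth and abs(gv) < t_smooth:
        return 'smooth'          # small gradient variations
    if abs(gh) > ratio * abs(gv):
        return 'vertical'        # strong horizontal gradient -> vertical edge
    if abs(gv) > ratio * abs(gh):
        return 'horizontal'      # strong vertical gradient -> horizontal edge
    return 'inclined'
```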
Edge-Based De-Interlacer 140
The edge-based de-interlacer 140 of this embodiment de-interlaces the image according to the detection result of the edge detector 130. The edge-based de-interlacer 140 includes a reference pixel determiner (not shown) for determining a plurality of reference pixels having the strongest correlation with the current pixel according to the edge type, and a brightness value determiner (not shown) for deriving the brightness value of the current pixel according to the reference pixels. The de-interlacer 140 applies the corresponding de-interlacing method according to whether the current pixel is in the smooth region, on the vertical edge, on the horizontal edge, or on the inclined edge.
1. Smooth Region
When the current pixel is in the smooth region, the pixels above and below the current pixel are used as the reference pixels, and their brightness values are averaged. The brightness value Fo of the pixel in the non-interlaced frame is then calculated according to the following equation:
Fo(x, n) = (1/2) × [f(x - y_u, n) + f(x + y_u, n)]   (4-1),
wherein x denotes the coordinate vector of the pixel, and y_u is a unit distance in the vertical direction.
2. Vertical Edge
When the current pixel is on the vertical edge, the pixel strongly correlates to the pixels directly above and below it. Thus, these two pixels are used as the reference pixels, and their brightness values are averaged to derive the brightness value of the pixel in the non-interlaced frame according to the following equation:
Fo(x, n) = (1/2) × [f(x - y_u, n) + f(x + y_u, n)]   (4-2).
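Equations (4-1) and (4-2) are the same vertical line average; a minimal sketch, assuming the field is a list of rows in which rows y-1 and y+1 hold valid brightness values:

```python
def line_average(field, x, y):
    """Equations (4-1)/(4-2): fill the missing pixel at column x, row y
    with the average of the pixels directly above and below it; used for
    both the smooth region and the vertical edge."""
    return 0.5 * (field[y - 1][x] + field[y + 1][x])
```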
3. Horizontal Edge
When the current pixel is on the horizontal edge, the current pixel strongly correlates to its left and right pixels. However, the horizontal line on which the current pixel is located has no valid brightness value, so the brightness value of the current pixel cannot be obtained directly from the pixels on that line. The conventional method is implemented by way of motion estimation, but this imposes a heavy computational load. This embodiment uses only the brightness values in the same field to obtain the brightness value of the current pixel, so the computational load is reduced while relatively good quality is maintained. Because the current pixel on the horizontal edge has very strong correlation with its surrounding pixels, the brightness value of the current pixel is obtained by interpolation with proper pixels in this embodiment. This embodiment adopts the half-pel method to select the reference pixels and thus obtain the brightness value of the current pixel. First, the following brightness profile values are derived:
D1 = |f(x - y_1, n) - f(x + y_1, n)|, and
D2 = |f(x + y_2, n) - f(x - y_2, n)|,
wherein y_1 = (2, 1/2)^t, y_2 = (2, -1/2)^t, and x - y_1, x + y_1, x + y_2, x - y_2 are the selected reference pixels, which are not actually existing pixels.
Next, the minimum of D1 and D2 is found:
Dmin = min(D1, D2).
Then, the brightness value of the current pixel may be obtained according to the following equations:
Fo(x, n) = (1/2) × [f(x - y_1, n) + f(x + y_1, n)], if Dmin = D1, or
Fo(x, n) = (1/2) × [f(x - y_2, n) + f(x + y_2, n)], if Dmin = D2.
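The half-pel selection above can be sketched in Python. The half-pel samples at vertical offset ±1/2 are not real pixels; here they are synthesized by a 3:1 weighting of the valid lines above and below, which is an assumption of this sketch — the text only states that the selected reference pixels are not actually existing pixels:

```python
def half_pel(field, x, y, dx, half_up):
    """Brightness at (x + dx, y -/+ 1/2), interpolated from the valid
    lines y-1 and y+1; a half-pel nearer the upper line weights it more."""
    above = field[y - 1][x + dx]
    below = field[y + 1][x + dx]
    return (0.75 * above + 0.25 * below) if half_up else (0.25 * above + 0.75 * below)

def horizontal_edge_pixel(field, x, y):
    """Pick the half-pel pair (D1 or D2) with the smaller brightness
    profile value and average it, per the horizontal-edge equations."""
    a1, b1 = half_pel(field, x, y, -2, True), half_pel(field, x, y, +2, False)
    a2, b2 = half_pel(field, x, y, +2, True), half_pel(field, x, y, -2, False)
    d1, d2 = abs(a1 - b1), abs(a2 - b2)
    return 0.5 * (a1 + b1) if d1 <= d2 else 0.5 * (a2 + b2)
```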
4. Inclined Edge
When the current pixel is positioned on the inclined edge, three steps are used to obtain the brightness value of the current pixel. The first and second steps derive the direction information of the current pixel, i.e., the angle information and the tilt information of the inclined edge. The first step derives the angle information of the inclined edge, i.e., the tilt angle. In this embodiment, the tilt angles of the inclined edge are divided into approximately horizontal, approximately 45 degrees, and approximately vertical. The second step derives the tilt information of the inclined edge (i.e., determines whether its slope is positive or negative).
4a. First Step: Calculate the Angle Information of the Inclined Edge.
In the first step, the angle information of the inclined edge is determined. First, the H_coeff and V_coeff obtained by the edge detector 130 are normalized into H_coeff′ and V_coeff′. The inclined edge on which the current pixel is positioned is determined to be approximately vertical if:
V_coeff′ > H_coeff′ × Ω,
wherein Ω is a constant ranging from 1.5 to 2 according to the experimental results. Conversely, the inclined edge on which the current pixel is positioned is determined to be approximately horizontal if:
H_coeff′ > V_coeff′ × Ω;
otherwise, the inclined edge on which the current pixel is positioned is determined to be approximately 45 degrees.
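The first step can be sketched directly from the two inequalities; the default value of Ω is an arbitrary choice inside the 1.5 to 2 range stated above:

```python
def classify_incline(h_coeff, v_coeff, omega=1.75):
    """Classify the tilt angle of the inclined edge from the normalized
    Sobel responses H_coeff' and V_coeff'."""
    if v_coeff > h_coeff * omega:
        return 'approximately vertical'
    if h_coeff > v_coeff * omega:
        return 'approximately horizontal'
    return 'approximately 45 degrees'
```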
4b. Second Step: Calculate the Tilt Information of the Inclined Edge.
The second step is to derive the tilt information of the inclined edge, that is, to determine whether the slope of the inclined edge is positive or negative.
The tilt direction determining procedure includes two conditions. In the first condition, it is determined whether or not the tilt directions of the pixel α and pixel δ are the same. If so, the tilt direction of the current pixel is the same as that of the pixel α. If not, it is determined whether or not the second condition is satisfied. In the second condition, it is determined whether or not the tilt directions of the pixel β and pixel γ are the same. If so, the tilt direction of the current pixel is the same as that of the pixel β. If not, the tilt direction of the current pixel is set to be indefinite.
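The two-condition procedure above can be sketched as follows, encoding each neighbouring pixel's tilt direction as +1 (positive slope) or -1 (negative slope) and 0 for indefinite; this encoding is an illustrative assumption:

```python
def tilt_direction(alpha, beta, gamma, delta):
    """Decide the current pixel's tilt direction from the tilt directions
    of pixels alpha, beta, gamma, and delta per the two conditions."""
    if alpha == delta:   # first condition: alpha and delta agree
        return alpha
    if beta == gamma:    # second condition: beta and gamma agree
        return beta
    return 0             # indefinite
```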
4c. Third Step: Calculate the Current Brightness Value According to the Direction Information of the Inclined Edge.
Next, the third step derives the brightness value of the current pixel according to the direction information of the current pixel, wherein a quarter-pel method is used so that the derived brightness value has better display quality.
(4c-1) Approximately Horizontal
When the current pixel is positioned on the approximately horizontal edge and the slope of the edge is negative, the reference pixels having the highest correlation with the inclined edge on which the current pixel is positioned are pixels U1, L5 and quarter-pels U1a, L4c, according to which the brightness value of the current pixel is derived by the following equations:
D1 = |B_U1 - B_L5|
D2 = |B_U1a - B_L4c|
Fo = (1/2) × (B_U1 + B_L5), if min(D1, D2) = D1
Fo = (1/2) × (B_U1a + B_L4c), if min(D1, D2) = D2
When the current pixel is positioned on the approximately horizontal edge and the slope of the edge is positive, the reference pixels having the highest correlation with the inclined edge on which the current pixel is positioned are pixels U5, L1, quarter-pels U4c and L1a, according to which the brightness value of the current pixel is derived by the following equations:
D3 = |B_U5 - B_L1|
D4 = |B_U4c - B_L1a|
Fo = (1/2) × (B_U5 + B_L1), if min(D3, D4) = D3
Fo = (1/2) × (B_U4c + B_L1a), if min(D3, D4) = D4
When the current pixel is positioned on the approximately horizontal edge and the slope is indefinite, the brightness value of the current pixel is derived according to the reference pixels U1, U5, L1, L5 and their adjacent quarter-pels U1a, U4c, L1a, and L4c by the following equations:
Fo = (1/2) × (B_U1 + B_L5), if min(D1, D2, D3, D4) = D1
Fo = (1/2) × (B_U1a + B_L4c), if min(D1, D2, D3, D4) = D2
Fo = (1/2) × (B_U5 + B_L1), if min(D1, D2, D3, D4) = D3
Fo = (1/2) × (B_U4c + B_L1a), if min(D1, D2, D3, D4) = D4
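The quarter-pel selection of (4c-1) can be sketched as follows, assuming the brightness values of the named pixels and quarter-pels (U1, L5, U1a, L4c, U5, L1, U4c, L1a) are supplied in a dictionary; the dictionary layout and the slope encoding (+1 positive, -1 negative, 0 indefinite) are illustrative, while the candidate pairs and the minimum-difference rule follow the equations above:

```python
def approx_horizontal_pixel(B, slope):
    """Pick the reference pair with the smallest brightness profile value
    among the candidates allowed by the slope, and average it."""
    diffs = {
        'D1': abs(B['U1'] - B['L5']),
        'D2': abs(B['U1a'] - B['L4c']),
        'D3': abs(B['U5'] - B['L1']),
        'D4': abs(B['U4c'] - B['L1a']),
    }
    pairs = {'D1': ('U1', 'L5'), 'D2': ('U1a', 'L4c'),
             'D3': ('U5', 'L1'), 'D4': ('U4c', 'L1a')}
    if slope > 0:
        candidates = ('D3', 'D4')                 # positive slope: U5/L1 side
    elif slope < 0:
        candidates = ('D1', 'D2')                 # negative slope: U1/L5 side
    else:
        candidates = ('D1', 'D2', 'D3', 'D4')     # indefinite: consider all
    best = min(candidates, key=lambda k: diffs[k])
    p, q = pairs[best]
    return 0.5 * (B[p] + B[q])
```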
(4c-2) Approximately 45 Degrees
(4c-3) Approximately Vertical
The above-mentioned methods for calculating the current pixel on the inclined edge obtain the brightness value of the current pixel by a pixel-based method. However, the implementation of these methods is not restricted to using only the five pixels above, U1 to U5, and the five pixels below, L1 to L5, as reference pixels. In addition to the above-described interpolation at a ratio of a quarter of the distance between pixels with equal weighting, the interpolation can also give a higher weighting coefficient to the pixel closer to the valid pixel. Further, a block-based approach to obtaining the brightness value of the current pixel can be adopted. For example, the brightness value of the pixel U2 used in the above methods can be replaced by the average of the pixel U2 and its left and right quarter-pels U1b, U1c, U2a, and U2c, or by the average of U1c and U2a.
While the invention has been described by way of example and in terms of a preferred embodiment, it is to be understood that the invention is not limited thereto. On the contrary, it is intended to cover various modifications and similar arrangements and procedures, and the scope of the appended claims therefore should be accorded the broadest interpretation so as to encompass all such modifications and similar arrangements and procedures.
Number | Date | Country | Kind
92137852 | Dec 2003 | TW | national
This application claims the benefit of provisional application Ser. No. 60/505,622, filed Sep. 25, 2003, and the benefit of Taiwan application Serial No. 92137852, filed Dec. 31, 2003, the subject matters of which are incorporated herein by reference.
Number | Date | Country
60505622 | Sep 2003 | US