1. Field of the Invention
The present invention relates generally to the processing of video images and, more particularly, to techniques for deinterlacing video images.
2. Description of the Related Art
All major television standards use a raster scanning technique known as “interlacing” or “interlace scanning.” Interlace scanning draws horizontal scan lines from the top of the screen to the bottom of the screen in two passes. Each pass is known as a field. In the National Television System Committee (NTSC) standard used in North America, each field takes approximately 1/60th of a second to draw.
Interlace scanning depends on the ability of the cathode ray tube (CRT) phosphors to retain an image for a few milliseconds, in effect acting like a "memory" to retain the previous field while the newer interleaved field is being scanned. Interlace scanning provides a benefit in television systems by doubling the vertical resolution of the system without increasing broadcast bandwidth.
This temporal displacement typically does not create a problem on conventional television displays, primarily because the image of the "older" field quickly fades in intensity as the light output of the phosphors decays. A secondary reason is that the spatial displacement in the images caused by motion results in fine detail that television displays resolve poorly. For these reasons, interlace scanning of motion pictures works acceptably well on conventional television displays.
If a motion picture formatted for an interlaced monitor device as in
In this example, each of the video fields 20 has a spatial resolution of 720 horizontal by 240 vertical pixels. Each field contains half the vertical resolution of a complete video image. The first series of video fields 20 is input to a deinterlace processor 22, which converts the 720 by 240 interlaced format to a second series of video fields 24. In this example, each of the second series of video fields 24 may have 720 by 480 pixels where the fields are displayed at 60 frames per second.
The resulting frame 54 will have all the lines 50 of the original video field 48. The remaining lines 56 are created by interpolation of lines 50. The resultant image will not have motion artifacts because all the lines in the image will be created from lines 50 that are time correlated. This alternative method 46 of deinterlacing does not produce motion artifacts, but the vertical resolution of the image is reduced by half.
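By way of illustration only, and not as part of the claimed invention, this single-field interpolation may be sketched as follows. The function name and data layout (a field as a list of rows of pixel values) are hypothetical; each missing line is taken as the average of its vertical neighbors, with the last line repeated.

```python
# Hypothetical sketch of single-field deinterlacing by line interpolation.
# The field's lines become every other line of the frame; each missing line
# is interpolated from the lines above and below it, so all output lines are
# derived from time-correlated samples.

def line_double(field):
    """Expand one video field (list of rows of pixel values) to a full frame."""
    frame = []
    for i, line in enumerate(field):
        frame.append(line)  # original field line (time correlated)
        if i + 1 < len(field):
            # interpolate the missing line from its vertical neighbors
            frame.append([(a + b) / 2 for a, b in zip(line, field[i + 1])])
        else:
            frame.append(list(line))  # repeat the last field line
    return frame

field = [[10, 20], [30, 40]]
frame = line_double(field)
# frame -> [[10, 20], [20.0, 30.0], [30, 40], [30, 40]]
```

As the sketch shows, the output has twice the line count of the field, but no new vertical detail: the interpolated lines carry only information already present in lines 50.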
In summary, deinterlacing by combining two fields into a single frame preserves the vertical resolution in an image, but may result in motion artifacts. Deinterlacing by interpolation of a single field to produce a frame eliminates the motion artifacts, but discards half the vertical resolution of the original image. In view of the foregoing, it is desirable to have a method of deinterlacing that provides for preservation of the full resolution of an image, while at the same time eliminating motion artifacts.
The present invention fills these needs by providing a method and apparatus for deinterlacing a video input stream while reducing motion artifacts and maintaining vertical resolution in the deinterlaced video stream. It should be appreciated that the present invention can be implemented in numerous ways, including as a process, an apparatus, a system, a device or a method. Several inventive embodiments of the present invention are described below.
In one embodiment of the present invention, a digital image processor is provided. The digital image processor includes a deinterlacing processor that is implemented upon a digital processing unit. The deinterlacing processor is coupled to an input operable to receive an interlaced video stream, a digital memory for storing portions of the interlaced video signal, and an output operable to transmit a deinterlaced video stream. The deinterlacing processor is operable to perform frequency analysis upon the received interlaced video stream in order to generate the deinterlaced video stream having reduced motion artifacts.
In another embodiment of the present invention, a method for deinterlacing an interlaced video stream is provided. The method includes receiving a video frame including a number of pixels from an input of the interlaced video stream. The video frame is analyzed for frequency information inherent to the video frame in order to detect motion artifacts. A number of motion artifact detection values is determined for each of the pixels in the video frame. An ultimate detection value is then determined for each motion artifact detection value. The ultimate detection value corresponding to each pixel is mixed with a set of spatially corresponding pixels to generate an output pixel.
In yet another embodiment of the present invention, a method for deinterlacing an interlaced video stream is provided. The method includes receiving a first video frame including a number of pixels from an input of the interlaced video stream. The first video frame is analyzed for frequency information inherent to the first video frame in order to detect motion artifacts. A number of motion artifact detection values is determined for each of the pixels in the first video frame. An ultimate detection value is then determined for each motion artifact detection value. A second video frame, which includes pixels that spatially correspond to pixels of the first video frame, is received from the input of the interlaced video stream. The ultimate detection value corresponding to each pixel is then mixed with a set of spatially corresponding pixels in the second video frame to generate an output pixel.
An advantage of the present invention is that it allows for detection and reduction of motion artifacts in video images. By reducing the effect of the motion artifact, the video image becomes much clearer and appears to be free of defects. Further, the deinterlacing is accomplished without loss of vertical resolution.
Other aspects and advantages of the invention will become apparent from the following detailed description, taken in conjunction with the accompanying drawings, illustrating by way of example the principles of the invention.
The present invention will be readily understood by the following detailed description in conjunction with the accompanying drawings. To facilitate this description, like reference numerals designate like structural elements.
A method and apparatus for video deinterlace processing are disclosed. In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be understood, however, by one skilled in the art, that the present invention may be practiced without some or all of these specific details. In other instances, well known process operations have not been described in detail in order not to unnecessarily obscure the present invention.
The array 58 is positioned so that a set of even numbered rows 60 contain pixels from the most recent or "current" field of the original source, and a set of odd numbered rows 62 contain pixels from the previous field. The array 58 is then stepped across the combined frame 36 from left to right horizontally. Each step causes the pixels in each of columns C1, C2, C3, and C4 to shift to the column to its immediate left. The pixels in column C0 shift out of the array 58, and a new column of pixels shifts into column C4.
After the array 58 has been stepped across all the horizontal positions, it is stepped down vertically by two pixels and returned to the left side of the field. Therefore, even numbered rows 60 contain pixels from the most recent field and odd numbered rows 62 contain pixels from the previous field. The process then repeats itself as array 58 is then stepped across the combined frame 36 again from left to right horizontally.
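For purposes of illustration only, the traversal of array 58 may be sketched as follows. The function name and the 5-column by 7-row window dimensions (columns C0 through C4, rows R0 through R6) follow the description above; the frame is assumed to be a list of rows of pixel values.

```python
# Hypothetical sketch of stepping a 5-wide by 7-tall pixel array across a
# combined frame: one pixel at a time horizontally, then down two lines
# (so the even/odd field interleave of the rows is preserved).

def windows(frame, width=5, height=7):
    """Yield each position of the sliding array over the combined frame."""
    rows, cols = len(frame), len(frame[0])
    for top in range(0, rows - height + 1, 2):    # step down by two lines
        for left in range(cols - width + 1):       # step right by one pixel
            yield [row[left:left + width] for row in frame[top:top + height]]

# A 9-line by 6-pixel frame yields 2 vertical x 2 horizontal = 4 positions.
frame = [[r * 10 + c for c in range(6)] for r in range(9)]
wins = list(windows(frame))
```

Stepping down by two lines rather than one keeps row R0 of the array aligned with the same field parity at every vertical position, so even rows 60 always hold current-field pixels.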
The weighted average is then used in an act 72 to compute an ultimate detection value (UDV). The weighting factors may include variables. One weighting example is in the following Equation 1:
UDV=(fd0+(2*fd1)+(8*fd2)+(2*fd3)+fd4)/14
The weighting causes frequency detection values closest to the center of array 58 to have the greatest influence on UDV. In this way, using five horizontally adjacent frequency detection values results in a low pass filtering act providing smoother transitions between areas within the image 36 where motion artifacts do and do not exist.
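For illustration only, Equation 1 may be sketched as follows; the function name is hypothetical, and the inputs fd0 through fd4 are the five horizontally adjacent frequency detection values described above.

```python
# Sketch of Equation 1: a weighted average of five horizontally adjacent
# frequency detection values. The center value fd2 carries weight 8, its
# neighbors weight 2, and the outermost values weight 1; the divisor 14 is
# the sum of the weights, so the result stays in the same range as the inputs.

def ultimate_detection_value(fd0, fd1, fd2, fd3, fd4):
    return (fd0 + 2 * fd1 + 8 * fd2 + 2 * fd3 + fd4) / 14.0

ultimate_detection_value(1.0, 1.0, 1.0, 1.0, 1.0)  # -> 1.0
```

Because the weights sum to the divisor, uniform inputs pass through unchanged, while isolated spikes are attenuated, giving the low pass filtering behavior described above.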
UDV computed in act 72 is used to control an act 74, which mixes a pixel with spatially corresponding pixels from the center of array 58 to generate an output pixel. Act 74 preferably implements the following Equation 2:
pixelout=(UDV*(pR2C2+pR4C2)/2)+((1−UDV)*pR3C2)
where pixelout is the output pixel of the deinterlacing act, pR2C2 is the pixel in the array 58 at location Row 2, Column 2, pR4C2 is the pixel in the array 58 at location Row 4, Column 2, and pR3C2 is the pixel in the array 58 at location Row 3, Column 2.
The result of mixing act 74 is that the new value of pixel pR3C2 of the array 58 depends on UDV. If no motion is detected by the calculation of UDV, then the pixel at pR3C2 will be the unmodified value of the pixel at that position in the previous field. If a large UDV, i.e., a value of 1, results, then a strong motion artifact has been detected, and the value of pR3C2 is computed by averaging the values of pR2C2 and pR4C2 of the array 58. The averaged result will not show motion artifacts because it is created from values of the most recent field, which are time correlated with one another. Detection values between 0 and 1 will cause the pixel at pR3C2 to be a mix of pR3C2 and the average of pR2C2 and pR4C2.
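For illustration only, Equation 2 and the mixing behavior just described may be sketched as follows; the function name is hypothetical, and the arguments are the array pixels named in Equation 2.

```python
# Sketch of Equation 2: blend the current-field vertical average
# (pR2C2 + pR4C2) / 2 with the previous-field pixel pR3C2, weighted by the
# ultimate detection value UDV in [0, 1].

def mix_pixel(udv, pR2C2, pR3C2, pR4C2):
    return udv * (pR2C2 + pR4C2) / 2 + (1 - udv) * pR3C2

mix_pixel(0.0, 100, 50, 100)  # no motion: previous-field pixel kept (50)
mix_pixel(1.0, 100, 50, 100)  # strong motion: current-field average (100.0)
```

At UDV = 0 the two-field weave is preserved at full vertical resolution; at UDV = 1 the output falls back to the artifact-free single-field average; intermediate values blend the two continuously.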
where fd is the frequency detection value for one column of array 58, R is a line index corresponding to rows R0 . . . R6 of array 58 and has the units "line," and Y(R) is the set of vertically adjacent samples 86.
The expression cos (2πR*0.5 cycles/line) evaluates to 1 for R=0, 2, 4, and 6 and to −1 for R=1, 3, and 5. Substituting these values for each of R0 . . . R6, the frequency detection equation becomes: fd=(Y6/2+Y4+Y2+Y0/2)−(Y5+Y3+Y1). Note that Y6 and Y0 are divided by 2 because the integration is over the limits 0 to 6. The final fd is the absolute value: fd=Abs(fd). The method 64 of
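For illustration only, the simplified frequency detection equation may be sketched as follows; the function name is hypothetical, and the input is the set of seven vertically adjacent samples Y0 . . . Y6.

```python
# Sketch of the simplified frequency detection: correlate seven vertically
# adjacent samples Y0..Y6 against a 0.5 cycle/line cosine. The endpoint
# samples Y0 and Y6 are halved because the integration runs over the
# limits 0 to 6; the result is the absolute value.

def frequency_detect(Y):
    assert len(Y) == 7
    fd = (Y[6] / 2 + Y[4] + Y[2] + Y[0] / 2) - (Y[5] + Y[3] + Y[1])
    return abs(fd)

frequency_detect([1, 0, 1, 0, 1, 0, 1])  # line-alternating pattern -> 3.0
frequency_detect([1, 1, 1, 1, 1, 1, 1])  # flat area -> 0.0
```

A strong line-to-line alternation, the signature of a motion artifact in a woven frame, produces a large fd, while static or smoothly varying areas produce values near zero.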
tfd=(pthfd−LTH)/UTH
where tfd is the thresholded frequency detection value, pthfd is the pre-thresholded frequency detection value (the output of act 66), LTH is the lower threshold value, and UTH is the upper threshold value. If tfd>1.0, then tfd=1.0. Otherwise, if tfd<0, then tfd=0.
While this invention has been described in terms of several preferred embodiments, it will be appreciated that those skilled in the art upon reading the preceding specifications and studying the drawings will realize various alterations, additions, permutations and equivalents thereof. It is therefore intended that the present invention include all such alterations, additions, permutations, and equivalents as fall within the true spirit and scope of the invention.
It will therefore be appreciated that the present invention provides a method and apparatus for deinterlacing an interlaced video stream that maintains the original resolution of the video stream while reducing edge artifacts in moving objects in an output video image. This is accomplished by employing two-field interlacing where the image is relatively static, and employing one-field line doubling where the image is rapidly changing. The combination of these techniques provides a low-artifact, high-resolution deinterlaced image.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention. Furthermore, certain terminology has been used for the purposes of descriptive clarity, and not to limit the present invention. The embodiments and preferred features described above should be considered exemplary, with the invention being defined by the appended claims.
This application claims the benefits of U.S. Patent Provisional Application No. 60/096,144 filed on Aug. 11, 1998, and is a continuation of and also claims the benefit of U.S. patent application Ser. No. 09/372,713 filed on Aug. 11, 1999 and issued as U.S. Pat. No. 6,489,998 on Dec. 3, 2002, and is related to U.S. patent application Ser. No. 09/167,527 filed on Oct. 6, 1998 and issued as U.S. Pat. No. 6,380,978 on Apr. 30, 2002, all three of which are incorporated herein by reference.
Number | Name | Date | Kind |
---|---|---|---|
4002827 | Nevin et al. | Jan 1977 | A |
4689675 | Tchorbajian et al. | Aug 1987 | A |
4959715 | Prodan | Sep 1990 | A |
5347314 | Faroudja et al. | Sep 1994 | A |
5444493 | Boie | Aug 1995 | A |
5543858 | Wischermann | Aug 1996 | A |
5600731 | Sezan et al. | Feb 1997 | A |
5784115 | Bozdagi | Jul 1998 | A |
5856930 | Hosono | Jan 1999 | A |
6014182 | Swartz | Jan 2000 | A |
6034733 | Bairam et al. | Mar 2000 | A |
6055018 | Swan | Apr 2000 | A |
6104755 | Ohara | Aug 2000 | A |
6166772 | Voltz et al. | Dec 2000 | A |
6222589 | Faroudja et al. | Apr 2001 | B1 |
6266092 | Wang et al. | Jul 2001 | B1 |
6269484 | Simsic et al. | Jul 2001 | B1 |
6295041 | Leung et al. | Sep 2001 | B1 |
6298144 | Pucker, II et al. | Oct 2001 | B1 |
6489998 | Thompson et al. | Dec 2002 | B1 |
6504577 | Voltz et al. | Jan 2003 | B1 |
6545719 | Topper | Apr 2003 | B1 |
6577345 | Lim et al. | Jun 2003 | B1 |
6847405 | Hsu et al. | Jan 2005 | B1 |
6867814 | Adams et al. | Mar 2005 | B1 |
6909469 | Adams | Jun 2005 | B1 |
20010016009 | Hurst | Aug 2001 | A1 |
20020109790 | Mackinnon | Aug 2002 | A1 |
Number | Date | Country | |
---|---|---|---|
20040056978 A1 | Mar 2004 | US |
Number | Date | Country | |
---|---|---|---|
60096144 | Aug 1998 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 09372713 | Aug 1999 | US |
Child | 10251642 | US |