This non-provisional application claims priority under 35 U.S.C. §119(a) on Patent Application No(s). 097151811 filed in Taiwan, R.O.C. on Dec. 31, 2008, the entire contents of which are hereby incorporated by reference.
1. Field of Invention
The present invention relates to a noise elimination method of video signals, and more particularly to a noise elimination method of an image sequence which is integrated with 3D noise filtering during color separation by using color filter array interpolation (CFAi).
2. Related Art
Generally, noises occurring in an image include impulse noises, salt-and-pepper noises, and Gaussian noises, among which the Gaussian noises most closely conform to the noises generated by an image sensor. Mean filters, median filters, and Gaussian filters are common 3D noise filters. The mean filter is a linear filter, which directly adds pixel values of adjacent pixels (or adds the pixel values after they are multiplied by weighting values) and then obtains a mean value thereof to replace an intermediate pixel value; the median filter, a non-linear filter, instead replaces the intermediate pixel value with the median of the neighborhood. The Gaussian filter applies the normal distribution characteristic of the Gaussian function, and selects an appropriate smoothing parameter (σ) to control the extent of eliminating the noises. In addition, methods of eliminating noises by using the Fourier transform and the wavelet transform are also available.
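For illustration, the three filter types described above can be sketched in Python as follows. This is a minimal sketch, not part of the specification: the 3×3 window size, the function names, and the kernel radius are illustrative assumptions.

```python
import numpy as np

def mean_filter3(img):
    """3x3 mean filter: replace each interior pixel by the average of its neighborhood."""
    out = img.astype(float).copy()
    for y in range(1, img.shape[0] - 1):
        for x in range(1, img.shape[1] - 1):
            out[y, x] = img[y - 1:y + 2, x - 1:x + 2].mean()
    return out

def median_filter3(img):
    """3x3 median filter: replace each interior pixel by the neighborhood median.
    Effective against impulse (salt-and-pepper) noises."""
    out = img.astype(float).copy()
    for y in range(1, img.shape[0] - 1):
        for x in range(1, img.shape[1] - 1):
            out[y, x] = np.median(img[y - 1:y + 2, x - 1:x + 2])
    return out

def gaussian_kernel(sigma, radius=2):
    """1D Gaussian kernel; the smoothing parameter sigma controls the
    extent of noise elimination. The kernel is normalized to sum to 1."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    return k / k.sum()
```

A single impulse in a flat region is removed by the median filter but only attenuated by the mean filter, which is why the median filter suits salt-and-pepper noises.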
Noises randomly occur in an image sequence. Existing 3D noise filtering technologies are all applied in a color space such as full RGB or YCC, and most 3D filtering processes applied to video signals (or image sequences) are performed after color separation through color filter array interpolation (CFAi). However, the noises may affect the results of the color separation through CFAi and the subsequent processes, and the processing results of those processes may in turn affect the correctness of motion estimation. Moreover, artifacts generated by the noises during the color separation through CFAi may also affect the correctness of motion estimation.
Accordingly, the present invention is a noise elimination method of an image sequence, which integrates color separation through color filter array interpolation (CFAi) with 3D noise filtering, so as to solve the above problems in the prior art.
A preferred embodiment of the method provided in the present invention comprises the following steps.
In Step A, a raw image data captured by an image capturing element is acquired.
In Step B, an interframe luma processing step is performed, in which the raw image data of a current frame is defined as a base image, the raw image data of a previous frame is defined as a reference image, and the base image and the reference image are respectively converted into a full luma base image and a full luma reference image represented by gray-scaled luminance values.
In Step C, a full RGB generation step is performed, in which an interpolation process is performed on the base image and the reference image by using the full luma base image and the full luma reference image generated in the above step, so as to generate a noise-free full RGB image.
The present invention also provides an adaptive noise elimination method of an image sequence, which comprises an adaptive average filtering step. In the filtering step, by using inter-image and intra-image information such as smoothness and similarity, an appropriate filtering manner is selected when processing the base image and the reference image, thereby obtaining a desired image filtering result.
The present invention further provides a noise elimination method of video signals through motion compensation, in which a global motion estimation and an image registration are performed to compensate the base image and the reference image during processing, so as to obtain a desired image filtering result.
The present invention will become more fully understood from the detailed description given herein below, which is for illustration only and thus is not limitative of the present invention, and wherein:
Referring to
The image capturing element 10 is an image sensor 12 having a color filter array (CFA) 11, and is used to capture an image of an external object and convert the image into an electrical signal having a raw image data, in which the electrical signal is a digital image signal. Then, a consecutive image sequence is generated by continuous shooting, which is the so-called video signal.
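The raw image data produced by such a sensor can be simulated as follows. This sketch assumes an RGGB Bayer pattern for the color filter array 11; the specification does not fix a particular CFA layout, so the pattern and function name are illustrative.

```python
import numpy as np

def bayer_mosaic(rgb):
    """Simulate an RGGB color filter array: each pixel keeps only the one
    color sample (r, g, or b) that its filter position admits."""
    h, w, _ = rgb.shape
    raw = np.zeros((h, w))
    raw[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red positions
    raw[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green positions (red rows)
    raw[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green positions (blue rows)
    raw[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue positions
    return raw
```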
The operation processor 20 executes the steps of the method according to the present invention through programs, so as to eliminate noises in the above raw image data.
The memory unit 30 is used to store relevant data and operation procedures during the image processing.
Particularly, the hardware system in
Referring to
In Step A, a raw image data captured by an image capturing element is acquired, so as to obtain a raw image data of a current frame (current image2) and a raw image data of a previous frame (previous image1), the raw image data of the current frame (current image2) is defined as a base image (raw image2), and the raw image data of the previous frame (previous image1) is defined as a reference image (raw image1).
In Step B, an interframe luma processing step is performed, so as to convert the base image (raw image2) and the reference image (raw image1) into a full luma base image (full luma image2) and a full luma reference image (full luma image1) represented by gray-scaled luminance values through a luma channel generation process.
In Step C, a full RGB generation step is performed, in which an interpolation process is performed on the base image and the reference image by using the full luma base image (full luma image2) and the full luma reference image (full luma image1) generated in the above step, so as to generate a noise-free full RGB image.
Referring to
In Step B-1, the base image (raw image2) and the reference image (raw image1) are converted into luminance signals through a luma channel generation process, for example, by using a mask, so as to obtain gray-scaled full luma images, and respectively obtain a full luma base image (full luma image2) and a full luma reference image (full luma image1).
In Step B-2, an image registration is performed on the full luma base image (full luma image2) and the full luma reference image (full luma image1), so as to generate a registered full luma reference image (registered full luma image1).
In Step B-3, an adaptive frame average filtering process is performed on the registered full luma reference image (registered full luma image1) and the full luma base image (full luma image2) generated in the above step, so as to generate a filtered full luma base image with noises eliminated (filtered full luma image2).
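The mask-based luma channel generation of Step B-1 can be sketched as follows. The 3×3 smoothing mask and its weights are illustrative assumptions, since the specification does not give the actual mask coefficients; the sketch only shows how every Bayer position receives a gray-scaled luminance value.

```python
import numpy as np

def bayer_to_full_luma(raw):
    """Estimate a full-resolution luma plane from Bayer raw image data by
    convolving with a small normalized smoothing mask, so that every pixel
    position obtains a gray-scaled luminance value regardless of which
    color filter covers it."""
    mask = np.array([[1, 2, 1],
                     [2, 4, 2],
                     [1, 2, 1]], dtype=float) / 16.0
    h, w = raw.shape
    padded = np.pad(raw.astype(float), 1, mode="edge")
    luma = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            luma[y, x] = (padded[y:y + 3, x:x + 3] * mask).sum()
    return luma
```

Because the mask is normalized, a uniformly exposed raw image maps to a uniform luma plane of the same level.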
A preferred embodiment of the adaptive frame average filtering process according to the present invention is shown in
In Step 4.1, a comparison block is respectively selected from an input image1 and an input image2. For example, the registered full luma reference image (registered full luma image1) and the full luma base image (full luma image2) in Step B-3 are the input image1 and the input image2 in this step.
In Step 4.2, the two selected comparison blocks are compared in terms of smoothness, for example, by a comparison analysis of the image gradient, so as to determine whether the input image1 and the input image2 both have smooth image contents. If the input image1 and the input image2 both have smooth image contents, Step 4.4 is performed, and if the input image1 and the input image2 have high-contrast or detailed image contents, Step 4.3 is performed.
In Step 4.3, an interframe similarity evaluation is performed on the two selected comparison blocks, particularly, by means of operating a sum of absolute difference (SAD), so as to determine whether the input image1 and the input image2 have a high similarity. If the input image1 and the input image2 have a high similarity, Step 4.4 is performed; otherwise, the content of the input image2 is reserved to directly serve as an averaged output image.
In Step 4.4, an average filtering process is performed on pixels of the input image1 and the input image2, so as to generate an averaged output image.
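Steps 4.1 through 4.4 above can be sketched as follows. The gradient measure, the SAD normalization, and the threshold values are illustrative assumptions; the specification states only that smoothness and similarity decide between averaging and keeping the base image content.

```python
import numpy as np

def adaptive_frame_average(block1, block2, grad_thresh=10.0, sad_thresh=8.0):
    """Adaptive frame average filtering on two co-located comparison blocks:
    block1 from the registered reference image, block2 from the base image."""
    b1 = block1.astype(float)
    b2 = block2.astype(float)

    # Step 4.2: smoothness via mean absolute gradient of each block.
    def mean_grad(b):
        gy = np.abs(np.diff(b, axis=0)).mean()
        gx = np.abs(np.diff(b, axis=1)).mean()
        return gy + gx

    if mean_grad(b1) < grad_thresh and mean_grad(b2) < grad_thresh:
        return (b1 + b2) / 2.0          # Step 4.4: both smooth -> average

    # Step 4.3: interframe similarity via per-pixel mean SAD.
    sad = np.abs(b1 - b2).mean()
    if sad < sad_thresh:
        return (b1 + b2) / 2.0          # similar enough -> average
    return b2                           # otherwise keep the base image content
```

Two smooth, nearly identical blocks are averaged (attenuating noises), while a detailed block that differs strongly from its reference is passed through unchanged, avoiding motion blur.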
Another preferred embodiment of the interframe luma processing in Step B further comprises a step of motion compensation. As shown in
In the present invention, the global shift estimation is adopted to reduce the pixel-wise searching range of a motion estimation module, and the full luma reference image (full luma image1) is slightly shifted before the image registration, thus reducing the processing time of the image registration.
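A global shift estimation of this kind can be sketched as a brute-force SAD search over integer displacements. The search radius and the use of wrap-around shifting are illustrative simplifications; a practical implementation would crop the borders instead of wrapping.

```python
import numpy as np

def global_shift_estimate(ref, base, max_shift=2):
    """Find the integer (dy, dx) shift of the reference image that minimizes
    the sum of absolute differences (SAD) against the base image. The result
    is used to pre-shift the reference before pixel-wise image registration,
    reducing the subsequent search range."""
    best, best_sad = (0, 0), float("inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(ref, dy, axis=0), dx, axis=1)
            sad = np.abs(shifted.astype(float) - base.astype(float)).sum()
            if sad < best_sad:
                best, best_sad = (dy, dx), sad
    return best
```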
Referring to
In Step C-1, a chroma image is obtained. Components of the three chroma, i.e., red (r), green (g), and blue (b), are obtained respectively from the base image (raw image2) (the raw image data of the current frame (current image2)) and the reference image (raw image1) (i.e., the raw image data of the previous frame (previous image1)) by using the chroma channel technology. In other words, a difference operation is performed between the reference image (raw image1) and the registered full luma reference image (registered full luma image1) obtained in Step B-2, and between the base image (raw image2) and the filtered full luma base image (filtered full luma image2) generated in Step B-3, so as to obtain the chroma component information (r, g, b) of the registered chroma reference image (registered chroma image1) and the chroma base image (chroma image2). The difference operation equations comprise Equation 1.1 and Equation 1.2 as follows.
chroma image1=CFA image1(r, g, b)−registered full luma image1(Y) (Equation 1.1)
chroma image2=CFA image2(r, g, b)−filtered full luma image2(Y) (Equation 1.2)
CFA (r, g, b) represents the raw image data of the r, g, b signals captured by the image sensor having the CFA.
In Step C-2, an adaptive frame average filtering process is performed. The registered chroma reference image (registered chroma image1) and the chroma base image (chroma image2) generated in Step C-1 respectively serve as the current and the previous frame images. Then, the chroma component data (r, g, b) of an average chroma base image (chroma image2′) is obtained through the adaptive frame average filtering process as shown in
In Step C-3, a full chroma image of the base image is generated, in which the filtered full luma base image (filtered full luma image2) generated in Step B-3 and the average chroma base image (chroma image2′) generated in Step C-2 are integrated into a full chroma image of the base image (full chroma image2).
In Step C-4, a full RGB generation step is performed, in which the filtered full luma base image (filtered full luma image2) generated in Step B-3 and the full chroma image of the base image (full chroma image2) generated in Step C-3 are integrated into a full RGB image, and the employed operation equation is Equation 2 as follows.
Full RGB Image=Filtered Full Luma Image2(Y)+full chroma image2(r, g, b) (Equation 2)
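The luma/chroma decomposition of Equations 1.1, 1.2, and 2 can be sketched as follows. For simplicity, the sketch treats each chroma plane as a per-pixel scalar (each raw sample keeping its r, g, or b meaning from the CFA pattern), and the function names are illustrative.

```python
import numpy as np

def chroma_plane(cfa_raw, full_luma):
    """Equations 1.1/1.2: chroma = CFA raw image data minus the full luma
    plane, computed per pixel."""
    return cfa_raw.astype(float) - full_luma.astype(float)

def full_rgb(filtered_luma, full_chroma):
    """Equation 2: full RGB image = filtered full luma + full chroma."""
    return filtered_luma.astype(float) + full_chroma.astype(float)
```

When the luma plane is unchanged by filtering (a noise-free input), adding the chroma back reconstructs the raw data exactly, which illustrates why the decomposition loses no information; with noises present, only the filtered luma differs, so the noise reduction carries through to the full RGB image.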
| Number | Date | Country | Kind |
|---|---|---|---|
| 097151811 | Dec 2008 | TW | national |