1. Field of the Invention
The disclosed embodiments of the present invention relate to an image system, and more particularly, to a de-noising method and a related image system.
2. Description of the Prior Art
In real-time digital image processing, there are mainly two kinds of de-noising methods. The first kind is performed in the spatial domain, such as Gaussian filtering, median filtering, bilateral filtering, and non-local means (NLM) filtering, which achieve a good effect. However, these spatial domain de-noising methods require a huge amount of calculation to obtain a better effect, and inevitably cause side effects such as image blur and loss of details.
The second kind of de-noising method is performed in the time domain, which considers a previous frame and a current frame at the same time and applies an appropriate weighted average in order to achieve the de-noising effect. Compared to the first kind of de-noising method, its main advantage is that it causes almost no image blur or detail loss; however, the time domain de-noising method may easily increase ghosting or make the image look unnatural, and minimizing these side effects often requires very complex operations.
In order to improve upon the de-noising methods of the time domain and the spatial domain, it is also practical to merge the two kinds of methods. However, a de-noising method using the time domain and the spatial domain at the same time suffers from three major problems: the first problem is a serious ghosting effect; the second problem is low image resolution; and the third problem is that when the noise is stronger, especially when the image capturing device is in a low light environment or the image periphery is affected by lens shading, the de-noising effect will be reduced.
Thus, a de-noising method with low complexity and high efficiency is required in this field to improve the above problems.
It is therefore one of the objectives of the present invention to provide a de-noising method and a related image system, so as to solve the above-mentioned problem.
In accordance with a first embodiment of the present invention, an exemplary de-noising method is disclosed. The de-noising method comprises: receiving a pixel of a current frame; deriving a de-noising coefficient according to specific information corresponding to the pixel; and generating an output pixel by allocating a weight of the pixel and a weight of at least one pixel of a previous frame according to the de-noising coefficient, wherein the at least one pixel of the previous frame includes a co-located pixel.
In accordance with a second embodiment of the present invention, an exemplary image system is disclosed. The image system comprises: a lens module, an image and signal processor, and a de-noising unit. The lens module is utilized for capturing image information. The image and signal processor is coupled to the lens module, and utilized for converting the image information into a frame. The de-noising unit is coupled to the image and signal processor, and utilized for: receiving a pixel of the frame; deriving a de-noising coefficient according to specific information corresponding to the pixel; and generating an output pixel by allocating a weight of the pixel and a weight of at least one pixel of a previous frame according to the de-noising coefficient, wherein the at least one pixel of the previous frame includes a co-located pixel.
In accordance with a third embodiment of the present invention, an exemplary image system is disclosed. The image system comprises: a lens module, an image and signal processor, a brightness adjusting unit, and a de-noising unit. The lens module is utilized for capturing image information. The image and signal processor is coupled to the lens module, and utilized for converting the image information into a frame. The brightness adjusting unit is coupled between the image and signal processor and the lens module, and utilized for generating an exposure control signal to the lens module according to automatic exposure information and generating frame rate information to the de-noising unit. The de-noising unit is utilized for: receiving a pixel of the frame; deriving a de-noising coefficient according to specific information corresponding to the pixel; and generating an output pixel by allocating a weight of the pixel and a weight of at least one pixel of a previous frame according to the de-noising coefficient, wherein the at least one pixel of the previous frame includes a co-located pixel, and the at least one pixel of the previous frame further comprises at least one pixel surrounding the co-located pixel.
In accordance with a fourth embodiment of the present invention, an exemplary image system is disclosed. The image system comprises: a lens module, an image and signal processor, a brightness adjusting unit, and a de-noising unit. The lens module is utilized for capturing image information. The image and signal processor is coupled to the lens module, and utilized for converting the image information into a frame. The brightness adjusting unit is coupled between the image and signal processor and the lens module, and utilized for generating an exposure control signal to the lens module according to automatic exposure information and generating frame rate information to the de-noising unit. The de-noising unit is utilized for performing a spatial domain de-noising process and a time domain de-noising process at least according to the frame rate information and a pixel of the frame, so as to generate an output pixel.
Briefly summarized, the spirit of the present invention is to use an adaptive method to dynamically determine the ratio of the time domain de-noising, and to further add the spatial domain de-noising, so as to achieve a real-time 3D de-noising method.
These and other objectives of the present invention will no doubt become obvious to those of ordinary skill in the art after reading the following detailed description of the preferred embodiment that is illustrated in the various figures and drawings.
Certain terms are used throughout the description and following claims to refer to particular components. As one skilled in the art will appreciate, manufacturers may refer to a component by different names. This document does not intend to distinguish between components that differ in name but not function. In the following description and in the claims, the terms “include” and “comprise” are used in an open-ended fashion, and thus should be interpreted to mean “include, but not limited to . . . ”. Also, the term “couple” is intended to mean either an indirect or direct electrical connection. Accordingly, if one device is coupled to another device, that connection may be through a direct electrical connection, or through an indirect electrical connection via other devices and connections.
In general, in order to obtain a better de-noising effect, the characteristics of the noises have to be analyzed first. There are two kinds of common static image noises: salt-and-pepper noise and Gaussian noise. However, for general image capturing devices, the captured images are dynamic, the noises of each frame might be different, and the noises of each point twinkle constantly to the eye (i.e. the whole frame is full of twinkling noises). The effect of using only the spatial domain to perform the de-noising process is therefore not ideal in this condition, and it is more proper to use a time domain filter, or the time domain plus the spatial domain, to perform the de-noising process.
The spirit of the present invention is to use an adaptive method to dynamically determine the ratio of the time domain de-noising, and to further add the spatial domain de-noising, so as to achieve a real-time 3D de-noising method. In the 3D de-noising method, the way the time domain de-noising strength (effect) is allocated will directly affect the user's perception. The present invention is suitable for all camera modules and shot environments. In a low light environment, for example, frames captured at two different time points are not only full of static noise, but also contain dynamic twinkling noises. Therefore, the present invention can reduce the dynamic twinkling noise to enhance the visual perception as far as possible under the condition of no loss of image details. In addition, the computational cost of the present invention is very low, and the present invention can be implemented in a variety of different ways, such as hardware (e.g. a chip), software (e.g. a driver or an application), firmware, or a partial or full combination thereof.
Please refer to the following equation (1), which expresses the time domain filtering process utilized by the present invention.
Pout = Pin × Cdenoising + ƒ3(q) × (1 − Cdenoising)   (1)
Pin is a value of a pixel in the current frame, q is a value of the pixel in the corresponding position in the previous frame (the co-located pixel), and Pout is the result generated by the filtering process (i.e. a new value of the pixel in the current frame). More specifically, an integrated de-noising coefficient Cdenoising is utilized here, and a dynamic determining method is utilized for determining the integrated de-noising coefficient Cdenoising which is most suitable for the pixel. As shown in the equation (1), when the integrated de-noising coefficient Cdenoising is larger, the output value is determined more by the value Pin of the pixel in the current frame. When the integrated de-noising coefficient Cdenoising is smaller, the output value is determined more by the value q of the pixel in the corresponding position in the previous frame. In other words, the smaller the integrated de-noising coefficient Cdenoising is, the stronger the time domain de-noising strength becomes.
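As a minimal illustration of equation (1), the per-pixel blend can be sketched as follows. The function and variable names are assumptions for illustration only, and ƒ3 is taken as a simple pass-through; in the invention ƒ3 may spatially process the previous-frame pixels.

```python
import numpy as np

def temporal_denoise(p_in: np.ndarray, q: np.ndarray,
                     c_denoising: np.ndarray) -> np.ndarray:
    """Per-pixel time domain filtering per equation (1).

    p_in        -- current frame (H x W), value Pin per pixel
    q           -- previous frame (H x W), co-located pixel values
    c_denoising -- integrated de-noising coefficient per pixel, in [0, 1]
    """
    f3_q = q  # assumed pass-through f3; may be a spatial filter in practice
    return p_in * c_denoising + f3_q * (1.0 - c_denoising)
```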
The above equation (1) can be further represented in equation (2) as follows.
Pout = Pin × ƒ1(ƒ2(C1, C2, . . . , Cn)) + ƒ3(q) × (1 − ƒ1(ƒ2(C1, C2, . . . , Cn)))   (2)
The integrated de-noising coefficient Cdenoising in the equation (1) is represented by ƒ1(ƒ2(C1, C2, . . . , Cn)). The filtering function ƒ1 is a global mapping function which can perform a whole adjustment of the de-noising coefficient. For example, it is practical to use the filtering function ƒ1 to perform a global gain process on an input, directly changing the input strength according to the characteristics of the lens and/or the light sensing element, and to generate an output that achieves a stable effect and prevents the result from being affected by different lenses; however, the present invention is not limited to this condition. If the output of the filtering function ƒ1 is larger than the input, it means that the filtering function ƒ1 increases the input strength. If the output of the filtering function ƒ1 is smaller than the input, it means that the filtering function ƒ1 decreases the input strength.
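For instance, ƒ1 may be realized as a simple global gain followed by clamping to the valid coefficient range. This is only one possible sketch; the gain value below is a placeholder to be tuned per lens/sensor, not a value taken from the invention.

```python
def f1_global_mapping(c: float, gain: float = 1.2) -> float:
    """Hypothetical global mapping f1: scale the combined coefficient
    by a lens/sensor dependent gain, then clamp to [0, 1].
    gain > 1 increases the input strength; gain < 1 decreases it."""
    return min(max(c * gain, 0.0), 1.0)
```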
Please refer to the flowchart of the real-time adaptive 3D dynamic de-noising method of the present invention, which comprises the steps 302, 304, 306, 308, and 310 described as follows.
In the step 302, a skin color detection is performed on the pixel to determine a skin color threshold value thdskin (i.e. Skin Condition).
In the step 304, the motion adjustment is performed according to the brightness, based on the Weber-Fechner Law. Applying the Weber-Fechner Law to image processing yields the following conclusion: for a fixed amount of noise, in places of higher brightness the noise is harder for human eyes to notice, and in places of lower brightness the noise is easier to notice. Thus, according to the above conclusion, a dynamic Weber threshold value thdweber is designed in the step 304, wherein thdweber is dynamically adjusted according to the brightness of the pixel.
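One plausible reading of the step 304 is sketched below, under the assumption that thdweber decreases as brightness increases (noise is more visible in dark regions, so a larger threshold there allows stronger time domain filtering). The direction of the relation and the constants are assumptions, not values from the invention.

```python
def weber_threshold(brightness: float,
                    base: float = 8.0, scale: float = 24.0) -> float:
    """Hypothetical dynamic Weber threshold for step 304.

    brightness is assumed normalized to [0, 1]; base and scale are
    illustrative tuning constants. Darker pixel -> larger threshold
    -> stronger time domain filtering (assumed interpretation)."""
    return base + scale * (1.0 - brightness)
```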
In the step 306, a motion strength Difference between the current frame and the previous k-th frame (k=1~n) is calculated. When the motion strength Difference is larger, it means that the motion level is higher, and when the motion strength Difference is smaller, it means that the motion level is lower. The motion strength Difference is defined as follows:
Difference = |pi,j * K − qi,j * K|   (3)

Here, * is a representative of the convolution calculation, pi,j is a representative of the current pixel at coordinate position (i,j), and qi,j is a representative of the pixel at coordinate position (i,j) in a previous frame. K is a representative of a specific process that brings the surrounding pixels into the calculation together with the pixel to be processed, in order to reduce the error. For example, Gauss coefficients can be used as K; that is, a higher weight is allocated to the pixel to be processed in the middle, and lower weights are allocated to the surrounding pixels. For the edge or corner pixels, details such as padding or mirroring of the image are involved. These details are all well known to those of average skill in this art, and thus further explanation of the details and operations is omitted herein for the sake of brevity.
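A sketch of the step 306 motion strength computation follows, assuming a normalized 3x3 Gaussian kernel and replicate padding for edge pixels; the exact kernel, padding scheme, and the form of equation (3) above are reconstructions, and the invention's actual choices may differ.

```python
import numpy as np
from scipy.ndimage import convolve

# Assumed 3x3 Gaussian coefficients: center weighted most heavily.
GAUSS_3X3 = np.array([[1, 2, 1],
                      [2, 4, 2],
                      [1, 2, 1]], dtype=np.float64) / 16.0

def motion_strength(p: np.ndarray, q: np.ndarray) -> np.ndarray:
    """Per-pixel motion strength Difference between the current frame p
    and a previous frame q. Both frames are smoothed with the kernel so
    that the surrounding pixels also enter the calculation (reducing the
    error); edge/corner pixels use replicate ('nearest') padding."""
    p_s = convolve(p.astype(np.float64), GAUSS_3X3, mode='nearest')
    q_s = convolve(q.astype(np.float64), GAUSS_3X3, mode='nearest')
    return np.abs(p_s - q_s)
```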
As mentioned above, when the motion strength Difference is larger, it means that the motion level is higher, which means the pixel tends not to need the filtering process in the time domain (to reduce the side effect of ghosting), and thus the corresponding filtering coefficient is larger. When the motion strength Difference is smaller, the corresponding filtering coefficient is smaller. A first dynamic threshold value thddynamic1 is obtained by adding the skin color threshold value thdskin and the Weber threshold value thdweber to a first predetermined threshold value thd1, and a second dynamic threshold value thddynamic2 is obtained by adding the skin color threshold value thdskin and the Weber threshold value thdweber to a second predetermined threshold value thd2, as follows:

thddynamic1 = thd1 + thdskin + thdweber   (4)

thddynamic2 = thd2 + thdskin + thdweber   (5)
The first predetermined threshold value thd1 and the second predetermined threshold value thd2 can be optimal values adjusted according to the lens and/or light sensing element in use. Next, a preposed de-noising coefficient Cpre_k is derived by comparing the motion strength Difference with the dynamic threshold values thddynamic1 and thddynamic2.
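The exact mapping from the motion strength to the preposed coefficient is not spelled out beyond the two dynamic thresholds, so the piecewise-linear mapping below is an assumption: below thddynamic1 the pixel is treated as static (small coefficient, strong temporal filtering), above thddynamic2 as moving (coefficient near 1, little temporal filtering), with linear interpolation in between.

```python
def preposed_coefficient(difference: float,
                         thd1: float, thd2: float,
                         thd_skin: float, thd_weber: float) -> float:
    """Hypothetical step 306 mapping, using equations (4) and (5).
    Assumes thd2 > thd1; the piecewise-linear mapping from Difference
    to Cpre_k is an illustrative assumption."""
    thd_dyn1 = thd1 + thd_skin + thd_weber  # equation (4)
    thd_dyn2 = thd2 + thd_skin + thd_weber  # equation (5)
    if difference <= thd_dyn1:
        return 0.0  # static pixel: filter strongly in the time domain
    if difference >= thd_dyn2:
        return 1.0  # moving pixel: keep the current-frame value
    return (difference - thd_dyn1) / (thd_dyn2 - thd_dyn1)
```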
In the step 308, a distance between the pixel in the current frame and the center point of the frame is calculated (i.e. Distance Condition). The purpose of the step 308 is to adjust the coefficient obtained in the step 306 according to this distance. In general, the farther a pixel is from the center point of the frame, the more seriously it is affected by lens shading, and thus a bigger gain is required to amplify the pixel value, which results in a pixel farther from the center point of the frame having more serious noises than one at the center. Thus, the pixel farther from the center point of the frame needs stronger filtering to improve the noises. Since a pixel farther from the center point of the frame usually does not belong to the images of attention due to its position, the ghosting side effect caused by the stronger filtering is less easily detected. Conversely, when the pixel is closer to the center point of the frame, the filtering strength is weaker. In this way, in the step 308, the corresponding adjusting coefficient R is obtained according to the distance from the center point of the frame, to adjust the preposed de-noising coefficients Cpre_k (k=1~n). The distance is calculated as follows:
Distance = √((Px − Cx)^2 + (Py − Cy)^2)   (6)
Px is the X coordinate of the current pixel, Py is the Y coordinate of the current pixel, Cx is the X coordinate of the center point of the frame, and Cy is the Y coordinate of the center point of the frame. The individual de-noising coefficient Ck is then obtained by multiplying the preposed de-noising coefficient Cpre_k by the adjusting coefficient R, as follows:
Ck = Cpre_k × R   (7)
However, the lens shading compensation method utilized by the present invention is not limited to this embodiment.
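A sketch of the step 308 distance adjustment follows: the Euclidean distance of equation (6) and the multiplication of equation (7) come from the text, while the mapping from distance to the adjusting coefficient R is an illustrative assumption (R shrinks toward the frame border, so Ck shrinks and the time domain filtering strengthens where lens shading amplifies noise).

```python
import math

def adjusting_coefficient(px: float, py: float,
                          cx: float, cy: float,
                          max_distance: float,
                          r_min: float = 0.5) -> float:
    """Hypothetical adjusting coefficient R for step 308.

    distance follows equation (6); the linear falloff from 1.0 at the
    frame center down to r_min at the border is an assumption. A
    smaller R yields a smaller Ck = Cpre_k * R (equation (7)), i.e.
    stronger time domain filtering near the frame border."""
    distance = math.sqrt((px - cx) ** 2 + (py - cy) ** 2)  # equation (6)
    t = min(distance / max_distance, 1.0)
    return 1.0 - (1.0 - r_min) * t
```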
In the step 310, the individual de-noising coefficients Ck (k=1˜n) are put in the equation (2) to obtain the result Pout. Please refer to the above paragraphs for the details.
Please refer to another embodiment of the real-time adaptive 3D dynamic de-noising method of the present invention, in which the dynamic threshold values further take a distance threshold value into account, as follows:
thddynamic1 = thd1 + thdskin + thddist + thdweber   (8)

thddynamic2 = thd2 + thdskin + thddist + thdweber   (9)
Compared with the equations (4) and (5), a distance threshold value thddist calculated in the step 804 is added. Thus, provided that substantially the same result is achieved, the steps of the real-time adaptive 3D dynamic de-noising method do not have to be executed in the above specific order, and such variations all fall within the scope of the present invention.
In general, in a low brightness environment, the received pixels are multiplied by a bigger gain before being processed by the real-time adaptive 3D dynamic de-noising method of the present invention, and thus the noises are amplified synchronously and become particularly apparent. The strength of the noise filtering therefore has to be relatively increased in this condition. On the contrary, if the environmental brightness is sufficient, the noise is not obvious, so in this case the strength of the noise filtering should be relatively reduced; otherwise it may affect the image clarity or cause other side effects. The present invention can optimize the adjustment according to the ambient light source and brightness. In another embodiment, the steps 802, 804, and 806 can be integrated with the steps of the aforementioned flowchart.
Please refer to the image system of the present invention described below, which includes a sensor 904, a de-noising unit 908, and a brightness adjusting unit 910.
For the de-noising unit 908, in order to obtain the environment light source and the environment brightness to achieve the optimal de-noising effect, the frame rate information Cfps can be utilized to derive the environment light source and the environment brightness. Specifically, when the environment is brighter, the frame rate information Cfps is higher. When the environment is darker, the brightness adjusting unit 910 actively increases the exposure time of the sensor 904, which lowers the frame rate information Cfps. In other words, the frame rate information Cfps in a brighter environment is higher than that in a darker environment.
The de-noising unit 908 can not only use the real-time adaptive 3D dynamic de-noising method described above, but can also further blend the de-noised output Pout with the input pixel Pin according to the frame rate information Cfps, as follows:
Pnew_out = Pin × α + Pout × (1 − α)   (10)
α is between 0 and 1, and is used to determine the strength of the de-noising effect. The calculation of α is as follows:
α = ƒ4(Cfps)   (11)
ƒ4 is a monotone increasing function. When the frame rate information Cfps is higher (a brighter environment), α is bigger, and the optimal output Pnew_out is determined more by the input pixel Pin, i.e. the de-noising strength is weaker. When the frame rate information Cfps is lower (a darker environment), α is smaller, and the optimal output Pnew_out is determined more by the de-noised result Pout, i.e. the de-noising strength is stronger.
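A sketch of the frame-rate-based blending of equations (10) and (11) follows. Only the monotone increasing property of ƒ4 comes from the text; the linear ramp and its breakpoints are assumptions for illustration.

```python
def f4_alpha(c_fps: float, fps_low: float = 10.0,
             fps_high: float = 30.0) -> float:
    """Hypothetical monotone increasing f4 of equation (11): maps the
    frame rate information Cfps to a blending weight alpha in [0, 1].
    The linear ramp between fps_low and fps_high is illustrative."""
    t = (c_fps - fps_low) / (fps_high - fps_low)
    return min(max(t, 0.0), 1.0)

def final_output(p_in, p_out, c_fps: float):
    """Equation (10): brighter scenes (higher Cfps) keep more of the
    raw input Pin; darker scenes rely more on the de-noised Pout."""
    alpha = f4_alpha(c_fps)
    return p_in * alpha + p_out * (1.0 - alpha)
```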
Those skilled in the art will readily observe that numerous modifications and alterations of the device and method may be made while retaining the teachings of the invention. Accordingly, the above disclosure should be construed as limited only by the metes and bounds of the appended claims.