1. Field of the Invention
The invention relates to digital image processing, and more particularly to determining interpolating direction for color demosaicking.
2. Description of the Related Art
Most digital cameras or image sensors use a color filter array, such as the well-known Bayer color filter array, to capture a digital image in order to reduce costs. In this way, each pixel in the captured image has only one measured color. This kind of image is called a mosaic image.
Color demosaicking is a process that estimates the two missing color values for each pixel. For example, some color demosaicking methods use bilinear interpolation, in which the unknown values of the two missing colors are calculated as an average of the values of neighboring pixels in a vertical, horizontal and/or diagonal direction. However, artifacts, such as the zipper effect or false color, may appear in the demosaicked image after color demosaicking, reducing image quality. The zipper effect makes a straight edge in the image look like a zipper. Some of these artifacts can be diminished by preventing interpolation across edges. Therefore, determining the interpolating direction for color demosaicking is an important issue.
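As background, the bilinear interpolation described above can be sketched as follows for a missing green value at a red or blue Bayer site; the function name and the toy patch are illustrative assumptions, not part of the related art being quoted:

```python
import numpy as np

def bilinear_green_at(img, x, y):
    """Estimate the missing green value at a red or blue Bayer site by
    averaging the four horizontally and vertically adjacent neighbors,
    which in a Bayer pattern are all green samples (hypothetical helper)."""
    return (img[x - 1, y] + img[x, y - 1] +
            img[x + 1, y] + img[x, y + 1]) / 4.0

# Tiny mosaic patch: the center is a red/blue site, its four
# horizontal/vertical neighbors carry the measured green values.
patch = np.array([[0, 10, 0],
                  [12, 0, 14],
                  [0, 16, 0]], dtype=float)
print(bilinear_green_at(patch, 1, 1))  # (12 + 10 + 14 + 16) / 4 = 13.0
```

Because this average ignores edges, interpolating across a sharp boundary is exactly what produces the zipper and false-color artifacts mentioned above.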
In view of the above, the invention provides a method for determining interpolating direction for color demosaicking. In one embodiment, steps of the method comprise: obtaining edge information of each pixel of an image captured by a color filter array; determining a highly horizontal level and a highly vertical level of each pixel; and determining an interpolating direction of each pixel based on the edge information, the highly horizontal level and the highly vertical level of the pixel.
In another embodiment, the invention provides a non-transitory machine-readable storage medium, having an encoded program code, wherein when the program code is loaded into and executed by a machine, the machine implements a method for determining interpolating direction for color demosaicking, and the method comprises: obtaining edge information of each pixel of an image captured by a color filter array; determining a highly horizontal level and a highly vertical level of each pixel; and determining an interpolating direction of each pixel based on the edge information, the highly horizontal level and the highly vertical level of the pixel.
In still another embodiment, the invention provides an apparatus for determining interpolating direction for color demosaicking, comprising: an input module, receiving an image captured by a color filter array; an edge sensing module coupled to the input module, obtaining edge information of each pixel of the image; a direction level evaluating module coupled to the input module, determining a highly horizontal level and a highly vertical level of each pixel; a direction determining module coupled to the edge sensing module and the direction level evaluating module, determining an interpolating direction of each pixel based on the edge information, the highly horizontal level and the highly vertical level of the pixel; and an output module coupled to the direction determining module, outputting the interpolating direction of each pixel.
A detailed description is given in the following embodiments with reference to the accompanying drawings.
The invention can be more fully understood by reading the subsequent detailed description and examples with references made to the accompanying drawings, wherein:
The following description is of the best-contemplated mode of carrying out the invention. This description is made for the purpose of illustrating the general principles of the invention and should not be taken in a limiting sense. The scope of the invention is best determined by reference to the appended claims.
In step S201, edge information of each pixel of an image captured by a color filter array is obtained. In one example, the edge information includes a vertical edge variation, a horizontal edge variation, a first diagonal edge variation and a second diagonal edge variation. In an image, there are edges where the light intensity (luminance) changes sharply, such as a boundary of an object in the image. The edge information herein is used to estimate whether a pixel lies on an edge and, if so, to determine the direction of the edge. The vertical edge variation represents a vertical luminance gradient of a pixel. The horizontal edge variation represents a horizontal luminance gradient of a pixel. The first diagonal edge variation represents a northeast-southwest luminance gradient of a pixel. The second diagonal edge variation represents a northwest-southeast luminance gradient of the pixel. The vertical edge variation, the horizontal edge variation, the first diagonal edge variation and the second diagonal edge variation will be described in detail later. Note that, in this specification, vertical is equivalent to north-south and horizontal is equivalent to east-west.
In step S202, a highly horizontal level and a highly vertical level of each pixel are determined. The highly horizontal level represents the possibility that the pixel is in a region where luminance values of pixels fluctuate in a vertical direction. The highly vertical level represents the possibility that the pixel is in a region where luminance values of pixels fluctuate in a horizontal direction. If the highly horizontal level of a pixel is large, it is more likely that the pixel is in a region with horizontal stripes. On the other hand, if the highly vertical level of a pixel is large, it is more likely that the pixel is in a region with vertical stripes. The highly horizontal level and the highly vertical level will be described in detail later.
In step S203, an interpolating direction of each pixel is determined based on the edge information, the highly horizontal level and the highly vertical level. The detail of determining the interpolating direction based on the edge information, the highly horizontal level and the highly vertical level will be described later.
The vertical edge variation V of the pixel Pi,j at (i,j) is obtained from the following formula:
V=Σm=−a to a(|L(Pi+m,j+2)−L(Pi+m,j)|+|L(Pi+m,j−2)−L(Pi+m,j)|),
wherein a is a user-defined positive integer and L(Px,y) is luminance of a pixel Px,y at (x, y).
Based on the formula described above, take a=1 as an example, as shown in
V=|L(Pi−1,j+2)−L(Pi−1,j)|+|L(Pi,j+2)−L(Pi,j)|+|L(Pi+1,j+2)−L(Pi+1,j)|+|L(Pi−1,j−2)−L(Pi−1,j)|+|L(Pi,j−2)−L(Pi,j)|+|L(Pi+1,j−2)−L(Pi+1,j)|.
According to the Bayer pattern, pixels Pi−1,j+2, Pi−1,j and Pi−1,j−2 have the same color; pixels Pi,j+2, Pi,j and Pi,j−2 have the same color; and pixels Pi+1,j+2, Pi+1,j and Pi+1,j−2 have the same color. Therefore, the vertical edge variation V is calculated based on luminance gradients of the same color. It should be noted that another color filter array may be used instead of the Bayer color filter array described herein.
Similar to the vertical edge variation V, the horizontal edge variation H of the pixel Pi,j at (i,j) is obtained from the following formula:
H=Σm=−a to a(|L(Pi+2,j+m)−L(Pi,j+m)|+|L(Pi−2,j+m)−L(Pi,j+m)|).
Based on the formula described above, take a=1 as an example, as shown in
H=|L(Pi+2,j−1)−L(Pi,j−1)|+|L(Pi+2,j)−L(Pi,j)|+|L(Pi+2,j+1)−L(Pi,j+1)|+|L(Pi−2,j−1)−L(Pi,j−1)|+|L(Pi−2,j)−L(Pi,j)|+|L(Pi−2,j+1)−L(Pi,j+1)|.
Similar to the horizontal edge variation H, the first diagonal edge variation D1 of the pixel Pi,j at (i,j), taking a=1 as an example, is obtained as follows:
D1=|L(Pi+1,j+2)−L(Pi−1,j)|+|L(Pi+2,j+2)−L(Pi,j)|+|L(Pi+3,j+2)−L(Pi+1,j)|+|L(Pi−3,j−2)−L(Pi−1,j)|+|L(Pi−2,j−2)−L(Pi,j)|+|L(Pi−1,j−2)−L(Pi+1,j)|.
According to the Bayer pattern, pixels Pi+1,j+2, Pi−1,j and Pi−3,j−2 have the same color; pixels Pi+2,j+2, Pi,j and Pi−2,j−2 have the same color; and pixels Pi+3,j+2, Pi+1,j and Pi−1,j−2 have the same color. Therefore, the first diagonal edge variation D1 is calculated based on luminance gradients of the same color.
Similar to the first diagonal edge variation D1, the second diagonal edge variation D2 of the pixel Pi,j at (i,j) is obtained from the following formula:
D2=Σm=−a to a(|L(Pi+m−2,j+2)−L(Pi+m,j)|+|L(Pi+m+2,j−2)−L(Pi+m,j)|).
As described above, after step S201, each pixel has the vertical edge variation V, the horizontal edge variation H, the first diagonal edge variation D1 and the second diagonal edge variation D2. To be noted, although the user-defined positive integers in the formulas of the vertical edge variation V, the horizontal edge variation H, the first diagonal edge variation D1 and the second diagonal edge variation D2 are all indicated as a, in practice the four formulas can use different values of the user-defined positive integer.
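The four edge variations for a=1 can be sketched as follows. The V and D1 sums transcribe the expansions quoted above; the H and D2 sums are written by symmetry and should be read as assumptions rather than quoted formulas:

```python
import numpy as np

def edge_variations(L, i, j):
    """Edge variations of pixel (i, j) for a = 1.  V and D1 transcribe
    the expansions given in the text; H and D2 are symmetric guesses."""
    A = lambda p, q, r, s: abs(L[p, q] - L[r, s])  # |L(Pp,q) - L(Pr,s)|
    V = (A(i-1, j+2, i-1, j) + A(i, j+2, i, j) + A(i+1, j+2, i+1, j) +
         A(i-1, j-2, i-1, j) + A(i, j-2, i, j) + A(i+1, j-2, i+1, j))
    H = (A(i+2, j-1, i, j-1) + A(i+2, j, i, j) + A(i+2, j+1, i, j+1) +
         A(i-2, j-1, i, j-1) + A(i-2, j, i, j) + A(i-2, j+1, i, j+1))
    D1 = (A(i+1, j+2, i-1, j) + A(i+2, j+2, i, j) + A(i+3, j+2, i+1, j) +
          A(i-3, j-2, i-1, j) + A(i-2, j-2, i, j) + A(i-1, j-2, i+1, j))
    D2 = (A(i-3, j+2, i-1, j) + A(i-2, j+2, i, j) + A(i-1, j+2, i+1, j) +
          A(i+1, j-2, i-1, j) + A(i+2, j-2, i, j) + A(i+3, j-2, i+1, j))
    return V, H, D1, D2

# Luminance that depends only on the first index is smooth along the
# direction measured by V, so V vanishes while the other variations do not.
L = np.tile(np.arange(7) * 10.0, (5, 1)).T  # L[i, j] = 10 * i, shape (7, 5)
print(edge_variations(L, 3, 2))  # (0.0, 120.0, 120.0, 120.0)
```

A small V combined with a large H, D1 and D2, as in the example call, is the signature of an edge along which vertical interpolation is safe.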
In one example, the highly vertical level HVL is determined according to the following codes:
As described above, the highly vertical level HVL of the pixel C represents the possibility that the pixel C is in a region where luminance values of pixels fluctuate in a horizontal direction. There are two situations in which the pixel C is in such a region. The first situation is that the trend of the luminance of the pixels from the pixel A to the pixel E is high-low-high-low-high. The second situation is that the trend of the luminance of the pixels from the pixel A to the pixel E is low-high-low-high-low. The parameter cnt1 represents the first situation while the parameter cnt0 represents the second situation. If cnt0 is larger than cnt1, the highly vertical level HVL of the pixel C is cnt0, which means that the pixel C is more likely to be in the second situation than in the first situation.
The condition “LA>LB” in the above codes is true when luminance of the pixel A is larger than luminance of the pixel B. In another example, the condition “LA>LB” can be designed to be true when luminance of the pixel A and luminance of the pixel B are both larger than a pre-determined threshold and luminance of the pixel A is larger than luminance of the pixel B.
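The listing referred to above is not reproduced here, so the following sketch is a reconstruction from the prose description only: cnt1 counts the pairwise comparisons consistent with the high-low-high-low-high trend over the five horizontally neighboring pixels A to E, cnt0 counts those consistent with the opposite trend, and HVL is the larger count. The function name and this exact counting scheme are assumptions:

```python
def highly_vertical_level(LA, LB, LC, LD, LE):
    """Reconstructed sketch of HVL for pixel C, given the luminance of the
    five horizontal neighbors A..E.  cnt1 counts comparisons matching the
    high-low-high-low-high trend, cnt0 the low-high-low-high-low trend;
    the level is the larger of the two counts (assumed form)."""
    cnt1 = sum([LA > LB, LB < LC, LC > LD, LD < LE])
    cnt0 = sum([LA < LB, LB > LC, LC < LD, LD > LE])
    return max(cnt0, cnt1)

print(highly_vertical_level(200, 50, 210, 40, 190))   # 4: strong horizontal fluctuation
print(highly_vertical_level(100, 100, 100, 100, 100)) # 0: flat region
```

Under this reading, each comparison could equally be replaced by the thresholded variant described above (both luminance values above a pre-determined threshold in addition to the inequality).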
Similarly,
In one example, the highly horizontal level HHL is determined according to the following codes:
As described above, the highly horizontal level HHL of the pixel C′ represents the possibility that the pixel C′ is in a region where luminance values of pixels fluctuate in a vertical direction. There are two situations in which the pixel C′ is in such a region. The first situation is that the trend of the luminance of the pixels from the pixel A′ to the pixel E′ is high-low-high-low-high. The second situation is that the trend of the luminance of the pixels from the pixel A′ to the pixel E′ is low-high-low-high-low. The parameter cnt1′ represents the first situation while the parameter cnt0′ represents the second situation. If cnt0′ is larger than cnt1′, the highly horizontal level HHL of the pixel C′ is cnt0′, which means that the pixel C′ is more likely to be in the second situation than in the first situation.
As described above, since the highly vertical level HVL and the highly horizontal level HHL are calculated based on the luminance of pixels that are not necessarily of the same color, using the highly vertical level HVL and the highly horizontal level HHL to decide the interpolating direction can improve performance, especially when information of the same color is not enough, such as for one-pixel-wide stripes in the image.
After step S202, the highly vertical level HVL and the highly horizontal level HHL of each pixel are obtained. Then an interpolating direction of each pixel is determined based on the edge information V, H, D1 and D2, the highly vertical level HVL and the highly horizontal level HHL in step S203, as shown in
If all the edge information, that is, the edge variations V, H, D1 and D2, is larger than a first predetermined threshold T1 (step S701: Yes), no interpolating direction is obvious, and the interpolating direction of the pixel is set to flat (step S702). In color demosaicking, "flat" means no particular interpolating direction is used when interpolating. If all edge variations V, H, D1 and D2 of a pixel are large, the pixel might be in a region with complicated texture; therefore, the interpolating direction of the pixel is flat. If not all the edge variations V, H, D1 and D2 are larger than the first predetermined threshold T1 (step S701: No), whether the difference between the second minimum edge variation and the minimum edge variation is larger than a second predetermined threshold T2 is checked (step S703). If the difference is larger than the second predetermined threshold T2 (step S703: Yes), the interpolating direction is the direction corresponding to the luminance gradient of the minimum edge variation. In this case, the minimum edge variation is significantly smaller than the other edge variations, so the luminance changes much more smoothly in its corresponding direction than in the other directions, and interpolating the information of the other two colors along that direction may avoid interpolating across an edge on which the pixel might lie. For example, if H is the second minimum edge variation, V is the minimum edge variation and the difference between them, that is, (H−V), is larger than T2, the interpolating direction of the pixel is vertical.
If the difference between the second minimum edge variation and the minimum edge variation is not larger than the second predetermined threshold T2 (step S703: No), two conditions are checked to determine whether the interpolating direction is to be determined according to the highly horizontal level HHL and the highly vertical level HVL (step S705). The first condition is that neither the highly horizontal level HHL nor the highly vertical level HVL is larger than a third predetermined threshold T3, and the second condition is that the highly horizontal level HHL is equal to the highly vertical level HVL. If neither condition is met (step S705: No), the interpolating direction is determined according to the highly horizontal level HHL and the highly vertical level HVL (step S706). For example, if HHL is larger than T3 and HVL is smaller than T3, neither condition is met; since HHL is larger, the interpolating direction is horizontal. If one of the conditions is met (step S705: Yes), the interpolating direction is determined not according to the highly horizontal level HHL and the highly vertical level HVL but according to the vertical edge variation V and the horizontal edge variation H, as shown in steps S707˜S709.
If one of the two conditions is met (step S705: Yes), whether the vertical edge variation V is equal to the horizontal edge variation H is checked (step S707). If the vertical edge variation V is equal to the horizontal edge variation H (step S707: Yes), the interpolating direction is flat (step S708). If the vertical edge variation V is not equal to the horizontal edge variation H (step S707: No), the interpolating direction is the direction of the luminance gradient of the smaller of the vertical edge variation V and the horizontal edge variation H (step S709). For example, in step S709, if the horizontal edge variation H is smaller than the vertical edge variation V, the interpolating direction is horizontal. All the thresholds T1, T2 and T3 may be determined based on sharpness measurements and human vision. The frequency response of a standard test image may be considered when determining the thresholds. For example, image analyzing software, such as imatest and ImageJ, may be utilized to decide the thresholds.
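The decision flow of steps S701 through S709 can be sketched as follows; this is one illustrative reading of the flow with hypothetical direction labels, not the claimed implementation, and the thresholds T1 to T3 are user-tuned:

```python
def interpolating_direction(V, H, D1, D2, HHL, HVL, T1, T2, T3):
    """Illustrative sketch of the S701-S709 decision flow (assumed reading)."""
    variations = {"vertical": V, "horizontal": H,
                  "diagonal1": D1, "diagonal2": D2}
    # S701/S702: every variation large -> complicated texture -> flat.
    if all(v > T1 for v in variations.values()):
        return "flat"
    # S703: a clearly smallest variation wins its corresponding direction.
    ordered = sorted(variations.items(), key=lambda kv: kv[1])
    (min_dir, min_v), (_, second_v) = ordered[0], ordered[1]
    if second_v - min_v > T2:
        return min_dir
    # S705/S706: trust HHL/HVL unless both are weak or they tie.
    both_weak = HHL <= T3 and HVL <= T3
    if not both_weak and HHL != HVL:
        return "horizontal" if HHL > HVL else "vertical"
    # S707-S709: fall back to comparing V and H directly.
    if V == H:
        return "flat"
    return "vertical" if V < H else "horizontal"

# V is clearly smallest -> vertical (S703 branch).
print(interpolating_direction(10, 100, 90, 95, 0, 0, 200, 50, 5))  # vertical
# All variations large -> flat (S701 branch).
print(interpolating_direction(300, 300, 300, 300, 0, 0, 200, 50, 5))  # flat
```

Note that a minimum at D1 or D2 would select a diagonal interpolating direction in the same way that a minimum at V selects vertical.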
The interpolating direction of each pixel determined by steps in
In another embodiment of determining an interpolating direction of each pixel, the vertical edge variation V and the horizontal edge variation H are first modified respectively as follows:
The highly vertical level HVL and the highly horizontal level HHL are used as weightings to modify the vertical edge variation V and the horizontal edge variation H. Then the interpolating direction is a direction of a luminance gradient of the smaller one of the modified vertical edge variation V′ and the modified horizontal edge variation H′. For example, if the modified horizontal edge variation H′ is smaller than the modified vertical edge variation V′, the interpolating direction is horizontal.
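Since the modification formulas themselves are not reproduced above, the following sketch uses one assumed weighting, stated purely as an assumption: each edge variation is damped by the level that supports its direction, so a large HVL (vertical stripes) shrinks the modified vertical variation V′ and a large HHL (horizontal stripes) shrinks the modified horizontal variation H′:

```python
def modified_variations(V, H, HVL, HHL):
    """Assumed weighting, not the patented formulas: each variation is
    divided by one plus the level that favors its direction."""
    V_mod = V / (1.0 + HVL)  # large HVL -> prefer vertical interpolation
    H_mod = H / (1.0 + HHL)  # large HHL -> prefer horizontal interpolation
    return V_mod, H_mod

# A strong HVL overturns a raw comparison that would have picked horizontal.
V_mod, H_mod = modified_variations(V=60.0, H=50.0, HVL=4, HHL=0)
print("vertical" if V_mod < H_mod else "horizontal")  # 12.0 < 50.0 -> vertical
```

Any weighting with the same monotonic behavior would serve the stated purpose of letting HVL and HHL bias the choice between the two directions.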
In another embodiment, after determining interpolating directions of all pixels, the consistency of the interpolating directions is checked.
b illustrates an example of checking the consistency of the pixel PC and its neighboring pixels PL′ and PN′, wherein the pixels PC, PL′ and PN′ do not have the same color in a Bayer pattern. If the interpolating direction of the pixel PC is horizontal or vertical, whether one of the pixels PL′ and PN′ has the same interpolating direction as the pixel PC is checked. If one of the pixels PL′ and PN′ has the same interpolating direction as the pixel PC, the interpolating direction of the pixel PC is trustworthy, and the interpolating direction of the pixel PC is not changed when implementing color demosaicking on the pixel PC. If neither of the pixels PL′ and PN′ has the same interpolating direction as the pixel PC, the interpolating direction of the pixel PC can be changed by a built-in method for determining an interpolating direction of a color demosaicking method.
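The consistency check can be sketched as follows; the function name is a hypothetical label, and the check is written only for horizontal and vertical directions, which are the cases the description above covers:

```python
def direction_is_trustworthy(dir_C, dir_L, dir_N):
    """Sketch of the consistency check: a horizontal or vertical direction
    of pixel PC is kept fixed only if at least one of the two neighbors
    PL' and PN' (different Bayer colors) has the same direction."""
    # The text defines this check only for horizontal/vertical directions.
    assert dir_C in ("horizontal", "vertical")
    return dir_C == dir_L or dir_C == dir_N

print(direction_is_trustworthy("horizontal", "vertical", "horizontal"))  # True
print(direction_is_trustworthy("vertical", "horizontal", "flat"))        # False
```

In the False case, the direction of PC would be left open for the demosaicking method's own built-in direction decision, as described above.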
The method for determining interpolating direction for color demosaicking described above may take the form of a program code (i.e., instructions) embodied on a non-transitory machine-readable storage medium such as floppy diskettes, CD-ROMS, hard drives, firmware, or any other machine-readable storage medium. When the program code is loaded into and executed by a machine, such as a digital imaging device or a computer, the machine implements the method for determining interpolating direction for color demosaicking.
The apparatus 90 comprises an input module 910, an edge sensing module 920, a direction level evaluating module 930, a direction determining module 940, a consistency checking module 950 and an output module 960. All modules may be general-purpose processors. The input module 910 receives an image IMG captured by a color filter array, such as a Bayer color filter array. The edge sensing module 920 is coupled to the input module and obtains edge information of each pixel of the image as described in step S201 of
Methods and apparatuses of the present disclosure, or certain aspects or portions of embodiments thereof, may take the form of a program code (i.e., instructions) embodied in media, such as floppy diskettes, CD-ROMs, hard drives, firmware, or any other machine-readable storage medium, wherein, when the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing embodiments of the disclosure. The methods and apparatus of the present disclosure may also be embodied in the form of a program code transmitted over some transmission medium, such as electrical wiring or cabling, through fiber optics, or via any other form of transmission, wherein, when the program code is received and loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing an embodiment of the disclosure. When implemented on a general-purpose processor, the program code combines with the processor to provide a unique apparatus that operates analogously to specific logic circuits.
While the invention has been described by way of example and in terms of preferred embodiment, it is to be understood that the invention is not limited thereto. To the contrary, it is intended to cover various modifications and similar arrangements (as would be apparent to those skilled in the art). Therefore, the scope of the appended claims should be accorded the broadest interpretation so as to encompass all such modifications and similar arrangements.