The above and other features and advantages of the present invention will become more apparent by describing in detail exemplary embodiments thereof with reference to the attached drawings in which:
The present invention will now be described more fully with reference to the accompanying drawings in which exemplary embodiments of the invention are shown.
The false contour detection unit 210 includes a contour detector 212 and a false contour separator 214.
The contour detector 212 removes flat areas from an input image using a difference between the input image and an image obtained by reducing the number of bits of the input image, and detects a contour area in the input image. The contour area comprises not only a false contour area but also an edge area.
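The detection step described above can be sketched in a few lines. This is a minimal numpy illustration, assuming an 8-bit input image whose bit depth is reduced to 6 bits; the function name detect_contour_area and the parameter choices are ours, not the patent's:

```python
import numpy as np

def detect_contour_area(image, reduced_bits=6):
    """Candidate contour pixels: where the input differs from a copy whose
    bit depth has been reduced; flat areas (zero difference) drop out."""
    shift = 8 - reduced_bits                  # assume an 8-bit input image
    reduced = (image >> shift) << shift       # clear the low-order bits
    diff = image.astype(np.int16) - reduced.astype(np.int16)
    return diff != 0                          # True on contour candidates

# A horizontal ramp: flat where values are multiples of 4, candidates elsewhere.
ramp = np.tile(np.arange(64, dtype=np.uint8), (4, 1))
candidates = detect_contour_area(ramp)
```

As the text notes, this candidate map still mixes false contours with true edges; the separation is performed afterwards by the false contour separator 214.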
The false contour separator 214 separates a false contour area and an edge area from the contour area obtained by the contour detector 212, and generates information (hereinafter referred to as false contour direction information) indicating the direction of the false contour area and information (hereinafter referred to as false contour location information) indicating the location of the false contour area.
The operation of the false contour detection unit 210 will be described later in further detail with reference to
The false contour area expansion unit 220 includes a structural element generator 224 and a calculator 226.
The structural element generator 224 generates a structural element that is needed to expand a false contour area.
The calculator 226 expands a false contour area, according to the size and shape of the structural element generated by the structural element generator 224, by performing a binary morphology dilation operation. If the structural element generated by the structural element generator 224 is circular, a false contour area is expanded to be as large as a circular mask by performing a binary morphology dilation operation.
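The dilation step might be sketched as follows; binary_dilate and circular_structure are illustrative helpers written out explicitly here, and an equivalent routine from an image processing library could equally be used:

```python
import numpy as np

def binary_dilate(mask, structure):
    """Binary morphology dilation: a pixel is set in the output if the
    structuring element, centered there, overlaps any set pixel of the mask."""
    H, W = mask.shape
    sh, sw = structure.shape
    ch, cw = sh // 2, sw // 2
    padded = np.pad(mask, ((ch, ch), (cw, cw)))
    out = np.zeros_like(mask)
    for di in range(sh):
        for dj in range(sw):
            if structure[di, dj]:
                out |= padded[di:di + H, dj:dj + W]
    return out

def circular_structure(radius):
    """Circular structuring element of the given radius."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    return x * x + y * y <= radius * radius
```

Dilating a detected false contour with a circular element grows it into the circular mask area the text describes.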
Structural elements and the binary morphology dilation operation are well known to one of ordinary skill in the art to which the present invention pertains, and thus detailed descriptions thereof will be omitted.
The false contour removal unit 230 determines a smoothing mask weight according to the distance from a pixel in the mask to a center pixel of a false contour, determines an edge preservation mask weight according to the contrast with the center pixel of the false contour, and removes the false contour by performing filtering using the smoothing mask weight and the edge preservation mask weight.
The operation of the false contour removal unit 230 will be described later in further detail with reference to
In operation 406, a structural element is generated, and the false contour area is expanded according to the size of the structural element by performing a binary morphology dilation operation.
In operation 408, a smoothing mask weight is determined according to the distance from a pixel in the mask to a center pixel of the false contour area, an edge preservation mask weight is determined according to the contrast with the center pixel of the false contour area, and the false contour area is filtered using the smoothing mask weight and the edge preservation mask weight.
According to the present exemplary embodiment, in operation 408, neural networks may be used to remove a false contour, and this will be described later in further detail with reference to
The false contour separator 214 separates a false contour area and an edge area from the contour information C(m,n), and generates false contour direction information and false contour location information.
In detail, the false contour direction information is generated based on the contour information C(m,n) of the input image I(m,n), as indicated by Equation (1):
where Contrastmax indicates a maximum contrast, K indicates the size of a mask in a horizontal direction, and L indicates the size of the mask in a vertical direction. The four components parenthesized in Equation (1) respectively indicate horizontal false contour direction information corresponding to an angle of 0°, vertical false contour direction information corresponding to an angle of 90°, diagonal false contour direction information corresponding to an angle of 135°, and opposite false contour direction information corresponding to an angle of 180°, and are represented as θh, θv, θd, and θad. A minimum contrast Contrastmin is calculated as indicated by Equation (2):
According to the present exemplary embodiment, a non-direction θnondir is added as a type of direction. The non-direction θnondir can be determined to correspond to the situation when the difference between the maximum contrast Contrastmax and the minimum contrast Contrastmin is less than a predefined threshold Th, as indicated by Equation (3):
Contrastmax−Contrastmin<Th (3).
Thereafter, a false contour area and an edge area are separated from the contour information C(m,n) according to whether the maximum contrast Contrastmax (hereinafter referred to as the maximum contrast Cm(m,n)) is less than a predefined threshold T. In other words, an area where the maximum contrast Cm(m,n) is larger than the predefined threshold T is determined as an edge area, and an area where the maximum contrast Cm(m,n) is less than the predefined threshold T is determined as a false contour area. In this manner, false contour direction information θ(m,n) and false contour location information Bf(m,n) can be obtained.
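The thresholding rule above can be sketched as follows. The function name is illustrative, and the boundary case where the maximum contrast exactly equals T (which the text leaves unspecified) is assigned to the false contour class here as an arbitrary choice:

```python
import numpy as np

def separate_contours(contour_mask, max_contrast, T):
    """Split detected contours into edges (max contrast above T) and
    false contours (max contrast at or below T, an illustrative choice)."""
    edge = contour_mask & (max_contrast > T)
    false_contour = contour_mask & (max_contrast <= T)
    return false_contour, edge

# Two contour pixels: one high-contrast (edge), one low-contrast (false contour).
contours = np.array([[True, True, False]])
cm = np.array([[50.0, 5.0, 0.0]])
bf, edge = separate_contours(contours, cm, T=10.0)
```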
However, the present invention is not restricted to the false contour detection method set forth herein.
The weight determiner 602 determines a smoothing mask weight ws according to the distance from a pixel in the mask to a center pixel of a false contour area, and an edge preservation mask weight wep according to the contrast between a pixel in the mask and the center pixel of the false contour area.
A weight function used to define each of the smoothing mask weight ws and the edge preservation mask weight wep may be defined by Equation (4):
where d indicates an input variable, and D indicates the size in pixels of a mask. A brightness difference or a distance may be used as the input variable d, but the present invention is not restricted thereto.
A second order weight function can be obtained by combining two first order weight functions, as indicated by Equation (5):
In this manner, an n-th order weight function can be generalized as indicated by Equation (6):
By using the n-th order weight function, a weight for an n-th input variable d=(d1, . . . , dn) can be determined. The width of the n-th order weight function can be determined according to the mask size D=(D1, . . . , Dn). The first order weight function can be used for determining a weight according to the contrast between areas in a black-and-white image or determining a weight for a moving image according to the passage of time. The second order weight function can be used for determining a weight according to a distance between areas in an image. The third order weight function can be used for determining a weight according to a difference between the colors of areas in a color image.
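The composition of first order weight functions into an n-th order function can be sketched as follows. Equation (4) itself is not reproduced in this excerpt, so a simple triangular profile is assumed for the first order function purely for illustration:

```python
def w1(d, D):
    """First order weight: 1 at d = 0, falling to 0 at |d| = D.
    A triangular profile is assumed here; the patent's Equation (4)
    defines the actual function."""
    return max(0.0, 1.0 - abs(d) / D)

def wn(d, D):
    """n-th order weight as a product of first order weights, one per
    component of d = (d1, ..., dn), with widths D = (D1, ..., Dn)
    (cf. Equations (5)-(6))."""
    w = 1.0
    for di, Di in zip(d, D):
        w *= w1(di, Di)
    return w
```

Under this construction the weight is 1 only when every input component is zero, and vanishes as soon as any component reaches its mask size, matching the stated role of D in bounding the function's width.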
However, the present invention is not restricted to the weight functions set forth herein.
The determination of a smoothing mask weight and an edge preservation mask weight using a weight function will hereinafter be described in further detail.
Assuming that a center pixel of a false contour area is x=(x1, x2) and a neighbor pixel in a smoothing mask is ξ=(ξ1,ξ2), a smoothing mask weight ws can be determined using a second order weight function, as indicated by Equation (7):
ws(ξ,x)=w2(ξ−x,M) (7)
where M=(M1, M2) indicates a parameter that is needed to determine the smoothing mask weight ws. Since the width of a weight function is the same as the size of a smoothing mask, the smoothing mask weight ws has a value of 0 outside the smoothing mask. As the size of the smoothing mask increases, false contours that are distant from each other can be more effectively removed. However, the larger the smoothing mask, the more likely it is to blur an image. Thus, there is the need to appropriately determine the smoothing mask weight ws.
An edge preservation mask weight wep is determined according to the contrast between the center pixel x and the neighbor pixel ξ using a first order weight function, as indicated by Equation (8):
wep(ξ,x)=w1(ΔI,ΔIfx)
ΔI=I(ξ)−I(x) (8)
where ΔI indicates the contrast between the center pixel x and the neighbor pixel ξ, and ΔIfx indicates a parameter that is needed to determine the edge preservation mask weight wep and is determined based on a maximum contrast detected in a false contour area by a user. If ΔI is smaller than ΔIfx, an edge preservation mask considers the neighbor pixel ξ when performing filtering. However, if ΔIfx is smaller than ΔI, the edge preservation mask does not consider the neighbor pixel ξ when performing filtering. In this manner, false contours can be effectively removed while preserving edge areas.
The brightness of each pixel of a black-and-white image is represented by a single value, and thus, a weight for a black-and-white image can be determined in the aforementioned manner. On the other hand, the brightness of each pixel of a color image is represented by three values, i.e., R, G, and B, and thus, a weight for a color image can be determined using a third order weight function, as indicated by Equation (9):
wep(ξ,x)=w3(ΔI,ΔIfx)
ΔI=I(ξ)−I(x) (9)
where ΔI is a color plane vector indicating the contrast between the center pixel x and the neighbor pixel ξ, and I(x) indicates the brightness of a color image. The brightness I(x) may be represented by a value of a YCbCr plane or a value of a CIE L*a*b* plane as well as a value of an RGB plane. The scalar parameter of Equation (8) is replaced by the vector ΔIfx in Equation (9).
Once the smoothing mask weight ws and the edge preservation mask weight wep are determined in the aforementioned manner, a false contour is removed by performing filtering on a false contour area using a weight that is obtained by multiplying the smoothing mask weight ws by the edge preservation mask weight wep and normalizing the result of the multiplication. This type of filtering is referred to as bilateral filtering, and is indicated by Equation (10):
where Nx indicates a mask whose center is x.
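Since Equation (10) is not reproduced in this excerpt, the following sketch applies bilateral filtering at a single pixel with Gaussian profiles assumed for the smoothing and edge preservation weights; the actual filter would use the weight functions defined above:

```python
import numpy as np

def bilateral_filter_pixel(image, x, half, sigma_s, sigma_r):
    """Bilateral filtering at one false contour pixel x = (i0, j0): each
    neighbor in the (2*half+1)-wide mask Nx contributes its value weighted
    by a spatial (smoothing) term times a range (edge preservation) term,
    and the result is normalized by the total weight. Gaussian profiles
    are an illustrative assumption."""
    i0, j0 = x
    acc, norm = 0.0, 0.0
    for i in range(i0 - half, i0 + half + 1):
        for j in range(j0 - half, j0 + half + 1):
            if 0 <= i < image.shape[0] and 0 <= j < image.shape[1]:
                ws = np.exp(-((i - i0) ** 2 + (j - j0) ** 2)
                            / (2 * sigma_s ** 2))
                wep = np.exp(-(float(image[i, j]) - float(image[i0, j0])) ** 2
                             / (2 * sigma_r ** 2))
                acc += ws * wep * image[i, j]
                norm += ws * wep
    return acc / norm
```

On a flat region the filter leaves values unchanged, and near a strong edge the range term suppresses neighbors from the far side, which is the edge-preserving behavior described above.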
However, the present invention is not restricted to bilateral filtering. In other words, a variety of filtering methods other than a bilateral filtering method may be used.
The removal of false contours using a bilateral filtering method has been described in detail so far. Hereinafter, the removal of false contours using neural networks will be described in detail.
An original input image I(m,n), an image If(m,n) including false contours, and false contour location information Bf(m,n) and false contour direction information θ(m,n) provided by the false contour detection unit 210 are input to the neural network unit 810. Pixels where a false contour is detected are represented by Bf(m,n)=1, and pixels where no false contour is detected are represented by Bf(m,n)=0. A weight can be determined by the learning of the neural network unit 810, as indicated by Equation (11):
W=[W(1),W(2),W(3),W(4),W(5),W(6),W(7),W(8)] (11)
where W(k) (1≦k=└θ(x,y)/45┘+1≦8) indicates the weight determined by the learning of the neural network unit 810, and └α┘ indicates the largest integer not greater than α.
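The index computation in Equation (11), which selects one of the eight direction-specific weight sets from a pixel's false contour direction, can be sketched as follows; network_index is an illustrative helper name:

```python
def network_index(theta_deg):
    """Select which of the eight direction-specific networks handles a
    pixel: k = floor(theta/45) + 1, clamped to the range 1..8
    (cf. Equation (11))."""
    k = int(theta_deg // 45) + 1
    return min(max(k, 1), 8)
```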
Each of the eight neural networks corresponding to the respective false contour directions comprises an input layer consisting of L nodes, a hidden layer consisting of M nodes, and an output layer consisting of N nodes. An input to each of the eight neural networks is obtained from a location in the image If(m,n) corresponding to a pixel in a mask that comprises L pixels surrounding a pixel where a false contour is detected and a target value is obtained from a location in the original input image I(m,n) corresponding to a center pixel of the mask.
Referring to
Here, the expansion distance may be 10 or greater in another exemplary embodiment.
Referring to
where d1 or d2 indicates a distance between a pixel incorporated into an expanded false contour filtering area and a pixel where a false contour is detected, D1 or D2 indicates the length in pixels by which a false contour filtering area is expanded, and r indicates the number of iterations of processing of a pixel during the expansion of a false contour filtering area and is equal to 0 for pixels that are processed for a first time. Equations (12) through (15) respectively correspond to pairs of false contour directions illustrated in
d1=|i| or |j|, and d2=|i| or |j| (16)
where i indicates a horizontal pixel distance, and j indicates a vertical pixel distance.
The lengths D1 and D2 can be defined by Equation (17):
D1=X or Y
D2=X or Y (17)
where X indicates the length in a horizontal direction by which a false contour filtering area is expanded, and Y indicates the length in a vertical direction by which a false contour filtering area is expanded.
Referring to
Referring to
where ci1 indicates a value obtained from an intermediate calculation process performed by a neural network. The value ci1 can be defined by Equation (19):
where the superscript 1 in wj,i1(k) indicates the layer, the subscripts i and j in wj,i1(k) indicate the locations of nodes in two consecutive layers, bi1 indicates a bias, the superscript 1 in bi1 indicates the layer, and the subscript i in bi1 indicates the location of a node.
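The hidden-layer computation of Equation (19) can be sketched as follows. Because the nonlinearity applied afterwards is not reproduced in this excerpt, a sigmoid is assumed purely for illustration:

```python
import math

def hidden_activation(x, W1, b1):
    """Hidden-layer values c_i^1 = sum_j w_{j,i}^1 * x_j + b_i^1
    (cf. Equation (19)), passed through a sigmoid. W1[j][i] is the weight
    from input node j to hidden node i; the sigmoid is an assumption."""
    M = len(b1)
    c = [sum(W1[j][i] * x[j] for j in range(len(x))) + b1[i]
         for i in range(M)]
    return [1.0 / (1.0 + math.exp(-ci)) for ci in c]
```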
A false contour removal filter 924 applies an adaptive one-dimensional (1D) directional smoothing filter to the image If′(m,n) provided by the weight applicator 922, and outputs an image Î(m, n) as a result of final false contour removal. Here, the false contour removal filter 924 is not restricted to an adaptive 1D directional smoothing filter, and this will hereinafter be described in detail.
The false contour removal filter 924 uses the false contour direction information θ(m,n), the false contour location information Bf(m,n), the contour information C(m,n), and the filtering area expansion information Bd(m,n) to perform filtering in a direction perpendicular to a false contour direction indicated by the false contour direction information θ(m,n). If the false contour removal filter 924 is a 9-tap smoothing filter, an adaptive 1D directional smoothing filter coefficient h(n) may be defined by Equation (20):
The false contour removal filter 924 performs filtering using the adaptive 1D directional smoothing filter coefficient h(n), thereby obtaining the image Î(m, n). The present invention is not restricted to a 9-tap smoothing filter. In other words, a 5-tap or 7-tap smoothing filter coefficient may be selectively applied to the present invention.
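Since the adaptive coefficients of Equation (20) are not reproduced in this excerpt, the sketch below uses uniform 1/9 taps and an assumed mapping from contour angle to the perpendicular pixel step, simply to illustrate filtering across the contour direction:

```python
def smooth_1d_directional(image, m, n, direction_deg, taps=9):
    """1D smoothing at (m, n) perpendicular to the false contour direction.
    Uniform coefficients stand in for Equation (20)'s adaptive taps, and
    the angle-to-step table is likewise an illustrative assumption."""
    # A contour at 0 deg (horizontal) is filtered vertically, and so on.
    perp_step = {0: (1, 0), 45: (1, 1), 90: (0, 1), 135: (1, -1)}
    di, dj = perp_step[direction_deg]
    half = taps // 2
    acc, count = 0.0, 0
    for t in range(-half, half + 1):
        i, j = m + t * di, n + t * dj
        if 0 <= i < len(image) and 0 <= j < len(image[0]):
            acc += image[i][j]
            count += 1
    return acc / count
```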
neural networks according to an exemplary embodiment of the present invention. Referring to
In operation 1204, a false contour filtering area is expanded using the false contour location information, the false contour direction information, and contour information.
In operation 1206, a weight is applied using the weight obtained in operation 1202, the false contour location information, the false contour direction information, and the image containing false contours.
In operation 1208, a false contour is removed from a false contour area by performing adaptive 1D smoothing filtering using the false contour location information, the false contour direction information, the contour information, and filtering area expansion information.
The present invention can be realized as computer-readable code embodied on a computer-readable recording medium. The computer-readable recording medium may be any type of recording device in which data is stored in a computer-readable manner. Non-limiting examples of the computer-readable recording medium include a ROM, a RAM, a CD-ROM, a magnetic tape, a floppy disc, an optical data storage, and a carrier wave (e.g., data transmission through the Internet). The computer-readable recording medium may be distributed over a plurality of computer systems connected to a network so that computer-readable code is written thereto and executed therefrom in a decentralized manner. Functional programs, code, and code segments needed for realizing the present invention can be easily construed by one of ordinary skill in the art.
According to the present invention, it is possible to remove false contours even when it is unknown what has caused the false contours, by detecting a false contour area candidate and performing false contour removal only on the detected false contour area candidate. In addition, according to the present invention, it is possible to enhance the quality of images by performing filtering while preserving edges in an original input image and precisely performing pixel-based processes through neural network learning.
While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present invention as defined by the following claims.
Number | Date | Country | Kind |
---|---|---|---|
10-2006-0052872 | Jun 2006 | KR | national |